


Perspectives of Oncologists on the Ethical Implications of Using Artificial Intelligence for Cancer Care.

Author Affiliations

Division of Population Sciences, Dana-Farber Cancer Institute, Boston, Massachusetts.

Harvard Medical School, Boston, Massachusetts.

Publication Information

JAMA Netw Open. 2024 Mar 4;7(3):e244077. doi: 10.1001/jamanetworkopen.2024.4077.

DOI: 10.1001/jamanetworkopen.2024.4077
PMID: 38546644
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10979310/
Abstract

IMPORTANCE

Artificial intelligence (AI) tools are rapidly integrating into cancer care. Understanding stakeholder views on ethical issues associated with the implementation of AI in oncology is critical to optimal deployment.

OBJECTIVE

To evaluate oncologists' views on the ethical domains of the use of AI in clinical care, including familiarity, predictions, explainability (the ability to explain how a result was determined), bias, deference, and responsibilities.

DESIGN, SETTING, AND PARTICIPANTS

This cross-sectional, population-based survey study was conducted from November 15, 2022, to July 31, 2023, among 204 US-based oncologists identified using the National Plan & Provider Enumeration System.

MAIN OUTCOMES AND MEASURES

The primary outcome was response to a question asking whether participants agreed or disagreed that patients need to provide informed consent for AI model use during cancer treatment decisions.

RESULTS

Of 387 surveys, 204 were completed (response rate, 52.7%). Participants represented 37 states, 120 (63.7%) identified as male, 128 (62.7%) as non-Hispanic White, and 60 (29.4%) were from academic practices; 95 (46.6%) had received some education on AI use in health care, and 45.3% (92 of 203) reported familiarity with clinical decision models. Most participants (84.8% [173 of 204]) reported that AI-based clinical decision models needed to be explainable by oncologists to be used in the clinic; 23.0% (47 of 204) stated they also needed to be explainable by patients. Patient consent for AI model use during treatment decisions was supported by 81.4% of participants (166 of 204). When presented with a scenario in which an AI decision model selected a different treatment regimen than the oncologist planned to recommend, the most common response was to present both options and let the patient decide (36.8% [75 of 204]); respondents from academic settings were more likely than those from other settings to let the patient decide (OR, 2.56; 95% CI, 1.19-5.51). Most respondents (90.7% [185 of 204]) reported that AI developers were responsible for the medico-legal problems associated with AI use. Some agreed that this responsibility was shared by physicians (47.1% [96 of 204]) or hospitals (43.1% [88 of 204]). Finally, most respondents (76.5% [156 of 204]) agreed that oncologists should protect patients from biased AI tools, but only 27.9% (57 of 204) were confident in their ability to identify poorly representative AI models.
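As a quick plausibility check, the bracketed counts in the paragraph above can be re-derived with a few lines of arithmetic. This is a minimal sketch: the counts and denominators are copied from the abstract, and the `pct` helper is illustrative, not part of the study.

```python
# Sanity-check the proportions reported in the Results section.
# Counts are taken directly from the abstract text (numerator, denominator,
# stated percentage); this is arithmetic verification only.

def pct(numerator, denominator):
    """Percentage rounded to one decimal place, matching the paper's reporting."""
    return round(100 * numerator / denominator, 1)

reported = {
    "survey response rate":        (204, 387, 52.7),
    "explainable to oncologists":  (173, 204, 84.8),
    "explainable to patients":     (47, 204, 23.0),
    "support patient consent":     (166, 204, 81.4),
    "let the patient decide":      (75, 204, 36.8),
    "developers responsible":      (185, 204, 90.7),
    "oncologists should protect":  (156, 204, 76.5),
    "confident identifying bias":  (57, 204, 27.9),
}

for label, (num, den, stated) in reported.items():
    assert pct(num, den) == stated, f"{label}: {pct(num, den)} != {stated}"
print("all reported percentages check out")
```

Every bracketed fraction in the abstract rounds to the percentage stated alongside it, which is a useful consistency check when transcribing survey results.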

CONCLUSIONS AND RELEVANCE

In this cross-sectional survey study, few oncologists reported that patients needed to understand AI models, but most agreed that patients should consent to their use, and many tasked patients with choosing between physician- and AI-recommended treatment regimens. These findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions as well as decisional responsibility when problems related to AI use arise.

Figures: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d5/10979310/348eb95c4046/jamanetwopen-e244077-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d5/10979310/0f0dc89c6030/jamanetwopen-e244077-g002.jpg

Similar Articles

1. Perspectives of Oncologists on the Ethical Implications of Using Artificial Intelligence for Cancer Care.
JAMA Netw Open. 2024 Mar 4;7(3):e244077. doi: 10.1001/jamanetworkopen.2024.4077.
2. Chinese Oncologists' Perspectives on Integrating AI into Clinical Practice: Cross-Sectional Survey Study.
JMIR Form Res. 2024 Jun 5;8:e53918. doi: 10.2196/53918.
3. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.
J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.
4. The future of Cochrane Neonatal.
Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12.
5. Are medical oncologists ready for the artificial intelligence revolution? Evaluation of the opinions, knowledge, and experiences of medical oncologists about artificial intelligence technologies.
Med Oncol. 2023 Oct 9;40(11):327. doi: 10.1007/s12032-023-02200-9.
6. Future Use of AI in Diagnostic Medicine: 2-Wave Cross-Sectional Survey Study.
J Med Internet Res. 2025 Feb 27;27:e53892. doi: 10.2196/53892.
7. Expectations of Intensive Care Physicians Regarding an AI-Based Decision Support System for Weaning From Continuous Renal Replacement Therapy: Predevelopment Survey Study.
JMIR Med Inform. 2025 Apr 23;13:e63709. doi: 10.2196/63709.
8. Navigating the ethical landscape of artificial intelligence in radiography: a cross-sectional study of radiographers' perspectives.
BMC Med Ethics. 2024 May 11;25(1):52. doi: 10.1186/s12910-024-01052-w.
9. Analysis of Patient-Physician Concordance in the Understanding of Chemotherapy Treatment Plans Among Patients With Cancer.
JAMA Netw Open. 2020 Mar 2;3(3):e200341. doi: 10.1001/jamanetworkopen.2020.0341.
10. Awareness and Attitude Toward Artificial Intelligence Among Medical Students and Pathology Trainees: Survey Study.
JMIR Med Educ. 2025 Jan 10;11:e62669. doi: 10.2196/62669.

Cited By

1. Therapeutic breakthroughs in oncology: Enhancing treatment and management.
Can Oncol Nurs J. 2025 Jul 1;35(4):590-605. doi: 10.5737/23688076354590. eCollection 2025.
2. Application of artificial intelligence in medical imaging for tumor diagnosis and treatment: a comprehensive approach.
Discov Oncol. 2025 Aug 26;16(1):1625. doi: 10.1007/s12672-025-03307-3.
3. Artificial Intelligence and Decision-Making in Oncology: A Review of Ethical, Legal, and Informed Consent Challenges.
Curr Oncol Rep. 2025 Jun 17. doi: 10.1007/s11912-025-01698-8.
4. New horizons at the interface of artificial intelligence and translational cancer research.
Cancer Cell. 2025 Apr 14;43(4):708-727. doi: 10.1016/j.ccell.2025.03.018.
5. Artificial Intelligence in Relation to Accurate Information and Tasks in Gynecologic Oncology and Clinical Medicine-Dunning-Kruger Effects and Ultracrepidarianism.
Diagnostics (Basel). 2025 Mar 15;15(6):735. doi: 10.3390/diagnostics15060735.
6. Artificial intelligence in neurovascular decision-making: a comparative analysis of ChatGPT-4 and multidisciplinary expert recommendations for unruptured intracranial aneurysms.
Neurosurg Rev. 2025 Feb 21;48(1):261. doi: 10.1007/s10143-025-03341-3.
7. Global research trends in the application of artificial intelligence in oncology care: a bibliometric study.
Front Oncol. 2025 Jan 7;14:1456144. doi: 10.3389/fonc.2024.1456144. eCollection 2024.
8. Beyond the hype: Navigating bias in AI-driven cancer detection.
Oncotarget. 2024 Nov 7;15:764-766. doi: 10.18632/oncotarget.28665.
9. Research progress on machine algorithm prediction of liver cancer prognosis after intervention therapy.
Am J Cancer Res. 2024 Sep 25;14(9):4580-4596. doi: 10.62347/BEAO1926. eCollection 2024.
10. Machine learning in oncological pharmacogenomics: advancing personalized chemotherapy.
Funct Integr Genomics. 2024 Oct 4;24(5):182. doi: 10.1007/s10142-024-01462-4.

References

1. AI in Medicine-JAMA's Focus on Clinical Outcomes, Patient-Centered Care, Quality, and Equity.
JAMA. 2023 Sep 5;330(9):818-820. doi: 10.1001/jama.2023.15481.
2. A Process Framework for Ethically Deploying Artificial Intelligence in Oncology.
J Clin Oncol. 2022 Dec 1;40(34):3907-3911. doi: 10.1200/JCO.22.01113. Epub 2022 Jul 18.
3. 2022 Snapshot: State of the Oncology Workforce in America.
JCO Oncol Pract. 2022 May;18(5):396. doi: 10.1200/OP.22.00168.
4. The false hope of current approaches to explainable artificial intelligence in health care.
Lancet Digit Health. 2021 Nov;3(11):e745-e750. doi: 10.1016/S2589-7500(21)00208-9.
5. Ethics and standards in the use of artificial intelligence in medicine on behalf of the Royal Australian and New Zealand College of Radiologists.
J Med Imaging Radiat Oncol. 2021 Aug;65(5):486-494. doi: 10.1111/1754-9485.13289.
6. Responsibility beyond design: Physicians' requirements for ethical medical AI.
Bioethics. 2022 Feb;36(2):162-169. doi: 10.1111/bioe.12887. Epub 2021 Jun 5.
7. A Consensus-Based Checklist for Reporting of Survey Studies (CROSS).
J Gen Intern Med. 2021 Oct;36(10):3179-3187. doi: 10.1007/s11606-021-06737-1. Epub 2021 Apr 22.
8. State of Physician and Pharmacist Oncology Workforce in the United States in 2019.
JCO Oncol Pract. 2021 Jan;17(1):e1-e10. doi: 10.1200/OP.20.00600. Epub 2020 Dec 3.
9. Artificial Intelligence in Health Care: Will the Value Match the Hype?
JAMA. 2019 Jun 18;321(23):2281-2282. doi: 10.1001/jama.2019.4914.
10. The Results Are Only as Good as the Sample: Assessing Three National Physician Sampling Frames.
J Gen Intern Med. 2015 Aug;30 Suppl 3(Suppl 3):S595-601. doi: 10.1007/s11606-015-3380-9.