
Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.

Affiliations

Department of Communication and Psychology, Aalborg University, Copenhagen, Denmark.

Visual Analysis and Perception Lab, Aalborg University, Aalborg, Denmark.

Publication

J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.

DOI: 10.2196/26611
PMID: 34898454
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8713089/
Abstract

BACKGROUND

Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making. Transparency/explainability may come at the cost of performance. There is a need for public policy regulating the use of AI in health care that balances the societal interest in high performance against the interest in transparency/explainability. Such a policy should consider the wider public's interest in these features of AI.

OBJECTIVE

This study elicited the public's preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI.

METHODS

We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes playing a role in the respondents' views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using fractional factorial design. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to 3 different scenarios.
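The design described above (6 attributes, a fractional subset of 100 choice sets, 12 choice tasks per respondent) can be sketched in code. The attribute levels below are hypothetical illustrations, since the abstract does not list them, and random sampling is used as a simple stand-in for the authors' fractional factorial design, which would normally rely on an orthogonal or D-efficient array:

```python
import itertools
import random

# Hypothetical levels for the 6 attributes named in the Methods;
# the actual levels used in the survey are not given in the abstract.
ATTRIBUTES = {
    "decision_type":    ["diagnosis", "treatment planning"],
    "explanation":      ["none", "partial", "full"],
    "accuracy":         ["equal to physician", "better than physician"],
    "final_decision":   ["physician", "AI system"],
    "discrimination":   ["tested", "not tested"],
    "disease_severity": ["mild", "severe"],
}

def full_factorial(attrs):
    """All possible attribute-level combinations (candidate profiles)."""
    names = list(attrs)
    return [dict(zip(names, combo))
            for combo in itertools.product(*attrs.values())]

def sample_choice_sets(profiles, n_sets, alternatives=2, seed=0):
    """Draw a fraction of the full factorial as pairwise choice sets.

    NOTE: random sampling is a simplification; a real fractional
    factorial design balances level frequencies and overlaps.
    """
    rng = random.Random(seed)
    sets_ = []
    while len(sets_) < n_sets:
        cand = rng.sample(profiles, alternatives)
        if cand[0] != cand[1]:  # alternatives shown together must differ
            sets_.append(tuple(cand))
    return sets_

profiles = full_factorial(ATTRIBUTES)        # 2*3*2*2*2*2 = 96 profiles here
choice_sets = sample_choice_sets(profiles, n_sets=100)
```

Each respondent would then see a random block of 12 such sets and choose one alternative per set, which is the data a conjoint model is fitted to.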

RESULTS

Of the 1678 potential respondents, 1027 (61.2%) participated. The respondents consider the physician having the final responsibility for treatment decisions the most important attribute, with 46.8% of the total weight of attributes, followed by explainability of the decision (27.3%) and whether the system has been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, whether respondents live rurally or in towns, respondents' trust in health and technology, and respondents' fears and hopes regarding AI, do not play a significant role in the majority of cases.
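Attribute importance percentages like those reported above (46.8%, 27.3%, 14.8%) are conventionally derived from fitted part-worth utilities with the range method: each attribute's share is its utility range divided by the sum of all ranges. The part-worth values below are hypothetical and chosen only to illustrate the computation, not taken from the paper:

```python
# Hypothetical part-worth utilities (as would come from, e.g., a
# conditional logit fit to the choice data); values are illustrative.
part_worths = {
    "final_decision":   {"physician": 1.40, "AI system": -1.40},
    "explanation":      {"full": 0.80, "partial": 0.10, "none": -0.90},
    "discrimination":   {"tested": 0.45, "not tested": -0.45},
    "accuracy":         {"better than physician": 0.20, "equal to physician": -0.20},
    "decision_type":    {"diagnosis": 0.15, "treatment planning": -0.15},
    "disease_severity": {"severe": 0.10, "mild": -0.10},
}

def importance_weights(pw):
    """Relative importance = attribute's utility range / sum of all ranges."""
    ranges = {a: max(v.values()) - min(v.values()) for a, v in pw.items()}
    total = sum(ranges.values())
    return {a: 100 * r / total for a, r in ranges.items()}

weights = importance_weights(part_worths)
# With these illustrative utilities, final_decision dominates, followed by
# explanation and discrimination, mirroring the ordering in the Results.
```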

CONCLUSIONS

The 3 factors that are most important to the public are, in descending order of importance, (1) that physicians are ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on AI system use in health care should give priority to such AI system use and ensure that patients are provided with information.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/45e6/8713089/3ec90c200015/jmir_v23i12e26611_fig1.jpg

Similar Articles

1. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.
J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.
2. Trading off accuracy and explainability in AI decision-making: findings from 2 citizens' juries.
J Am Med Inform Assoc. 2021 Sep 18;28(10):2128-2138. doi: 10.1093/jamia/ocab127.
3. The false hope of current approaches to explainable artificial intelligence in health care.
Lancet Digit Health. 2021 Nov;3(11):e745-e750. doi: 10.1016/S2589-7500(21)00208-9.
4. Digital Natives' Preferences on Mobile Artificial Intelligence Apps for Skin Cancer Diagnostics: Survey Study.
JMIR Mhealth Uhealth. 2021 Aug 27;9(8):e22909. doi: 10.2196/22909.
5. The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies.
J Biomed Inform. 2021 Jan;113:103655. doi: 10.1016/j.jbi.2020.103655. Epub 2020 Dec 10.
6. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective.
BMC Med Inform Decis Mak. 2020 Nov 30;20(1):310. doi: 10.1186/s12911-020-01332-6.
7. Population preferences for AI system features across eight different decision-making contexts.
PLoS One. 2023 Dec 1;18(12):e0295277. doi: 10.1371/journal.pone.0295277. eCollection 2023.
8. Explainability in medicine in an era of AI-based clinical decision support systems.
Front Genet. 2022 Sep 19;13:903600. doi: 10.3389/fgene.2022.903600. eCollection 2022.
9. Acceptance of Medical Artificial Intelligence in Skin Cancer Screening: Choice-Based Conjoint Survey.
JMIR Form Res. 2024 Jan 12;8:e46402. doi: 10.2196/46402.
10. The four dimensions of contestable AI diagnostics - A patient-centric approach to explainable AI.
Artif Intell Med. 2020 Jul;107:101901. doi: 10.1016/j.artmed.2020.101901. Epub 2020 Jun 9.

Cited By

1. The need for patient rights in AI-driven healthcare - risk-based regulation is not enough.
J R Soc Med. 2025 Jun 25:1410768251344707. doi: 10.1177/01410768251344707.
2. Attitudes Toward AI Usage in Patient Health Care: Evidence From a Population Survey Vignette Experiment.
J Med Internet Res. 2025 May 27;27:e70179. doi: 10.2196/70179.
3. The Evolving Landscape of Discrete Choice Experiments in Health Economics: A Systematic Review.
Pharmacoeconomics. 2025 May 21. doi: 10.1007/s40273-025-01495-y.
4. A choice based conjoint analysis of mobile healthcare application preferences among physicians, patients, and individuals.
NPJ Digit Med. 2025 May 3;8(1):244. doi: 10.1038/s41746-025-01610-5.
5. Receiving Information on Machine Learning-Based Clinical Decision Support Systems in Psychiatric Services Increases Staff Trust in These Systems: A Randomized Survey Experiment.
Acta Psychiatr Scand. 2025 Feb 11;152(1):39-48. doi: 10.1111/acps.13791.
6. Attitudes toward artificial intelligence and robots in healthcare in the general population: a qualitative study.
Front Digit Health. 2025 Jan 27;7:1458685. doi: 10.3389/fdgth.2025.1458685. eCollection 2025.
7. Exploring Stakeholder Perceptions about Using Artificial Intelligence for the Diagnosis of Rare and Atypical Infections.
Appl Clin Inform. 2025 Jan;16(1):223-233. doi: 10.1055/a-2451-9046. Epub 2024 Oct 25.
8. Patient trust in the use of machine learning-based clinical decision support systems in psychiatric services: A randomized survey experiment.
Eur Psychiatry. 2024 Oct 25;67(1):e72. doi: 10.1192/j.eurpsy.2024.1790.
9. Artificial intelligence in healthcare: a scoping review of perceived threats to patient rights and safety.
Arch Public Health. 2024 Oct 23;82(1):188. doi: 10.1186/s13690-024-01414-1.
10. Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of "Responsibility" in Artificial Intelligence within the Healthcare Context.
Asian Bioeth Rev. 2024 Jun 11;16(3):315-344. doi: 10.1007/s41649-024-00292-7. eCollection 2024 Jul.

References

1. Perceptions of Artificial Intelligence Among Healthcare Staff: A Qualitative Survey Study.
Front Artif Intell. 2020 Oct 21;3:578983. doi: 10.3389/frai.2020.578983. eCollection 2020.
2. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis.
Lancet Digit Health. 2019 Oct;1(6):e271-e297. doi: 10.1016/S2589-7500(19)30123-2. Epub 2019 Sep 25.
3. Clinician and computer: a study on patient perceptions of artificial intelligence in skeletal radiography.
BMJ Health Care Inform. 2020 Nov;27(3). doi: 10.1136/bmjhci-2020-100233.
4. Artificial Intelligence in Screening Mammography: A Population Survey of Women's Preferences.
J Am Coll Radiol. 2021 Jan;18(1 Pt A):79-86. doi: 10.1016/j.jacr.2020.09.042. Epub 2020 Oct 12.
5. The four dimensions of contestable AI diagnostics - A patient-centric approach to explainable AI.
Artif Intell Med. 2020 Jul;107:101901. doi: 10.1016/j.artmed.2020.101901. Epub 2020 Jun 9.
6. Use of AI-based tools for healthcare purposes: a survey study from consumers' perspectives.
BMC Med Inform Decis Mak. 2020 Jul 22;20(1):170. doi: 10.1186/s12911-020-01191-1.
7. Public Perception of Artificial Intelligence in Medical Care: Content Analysis of Social Media.
J Med Internet Res. 2020 Jul 13;22(7):e16649. doi: 10.2196/16649.
8. Public Perceptions of Artificial Intelligence and Robotics in Medicine.
J Endourol. 2020 Oct;34(10):1041-1048. doi: 10.1089/end.2020.0137. Epub 2020 Sep 29.
9. Health Care Employees' Perceptions of the Use of Artificial Intelligence Applications: Survey Study.
J Med Internet Res. 2020 May 14;22(5):e17620. doi: 10.2196/17620.
10. Patient Perspectives on the Use of Artificial Intelligence for Skin Cancer Screening: A Qualitative Study.
JAMA Dermatol. 2020 May 1;156(5):501-512. doi: 10.1001/jamadermatol.2019.5014.