
Trust in artificial intelligence for medical diagnoses.

Affiliations

Faculty of Psychology and Educational Sciences, Alexandru Ioan Cuza University, Iasi, Romania.

School of Computer Science, University of Nottingham, Nottingham, United Kingdom.

Publication Information

Prog Brain Res. 2020;253:263-282. doi: 10.1016/bs.pbr.2020.06.006. Epub 2020 Jul 2.

DOI: 10.1016/bs.pbr.2020.06.006
PMID: 32771128
Abstract

We present two online experiments investigating trust in artificial intelligence (AI) as a primary and secondary medical diagnosis tool and one experiment testing two methods to increase trust in AI. Participants in Experiment 1 read hypothetical scenarios of low and high-risk diseases, followed by two sequential diagnoses, and estimated their trust in the medical findings. In three between-participants groups, the first and second diagnoses were given by: human and AI, AI and human, and human and human doctors, respectively. In Experiment 2 we examined if people expected higher standards of performance from AI than human doctors, in order to trust AI treatment recommendations. In Experiment 3 we investigated the possibility to increase trust in AI diagnoses by: (i) informing our participants that the AI outperforms the human doctor, and (ii) nudging them to prefer AI diagnoses in a choice between AI and human doctors. Results indicate overall lower trust in AI, as well as for diagnoses of high-risk diseases. Participants trusted AI doctors less than humans for first diagnoses, and they were also less likely to trust a second opinion from an AI doctor for high risk diseases. Surprisingly, results highlight that people have comparable standards of performance for AI and human doctors and that trust in AI does not increase when people are told the AI outperforms the human doctor. Importantly, we find that the gap in trust between AI and human diagnoses is eliminated when people are nudged to select AI in a free-choice paradigm between human and AI diagnoses, with trust for AI diagnoses significantly increased when participants could choose their doctor. These findings isolate control over one's medical practitioner as a valid candidate for future trust-related medical diagnosis and highlight a solid potential path to smooth acceptance of AI diagnoses amongst patients.


Similar Articles

1. Trust in artificial intelligence for medical diagnoses.
   Prog Brain Res. 2020;253:263-282. doi: 10.1016/bs.pbr.2020.06.006. Epub 2020 Jul 2.
2. Do patients prefer a human doctor, artificial intelligence, or a blend, and is this preference dependent on medical discipline? Empirical evidence and implications for medical practice.
   Front Psychol. 2024 Aug 12;15:1422177. doi: 10.3389/fpsyg.2024.1422177. eCollection 2024.
3. Patients' Preferences for Artificial Intelligence Applications Versus Clinicians in Disease Diagnosis During the SARS-CoV-2 Pandemic in China: Discrete Choice Experiment.
   J Med Internet Res. 2021 Feb 23;23(2):e22841. doi: 10.2196/22841.
4. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions.
   BMC Med Inform Decis Mak. 2023 Apr 20;23(1):73. doi: 10.1186/s12911-023-02162-y.
5. Effects of a Differential Diagnosis List of Artificial Intelligence on Differential Diagnoses by Physicians: An Exploratory Analysis of Data from a Randomized Controlled Study.
   Int J Environ Res Public Health. 2021 May 23;18(11):5562. doi: 10.3390/ijerph18115562.
6. Intentional machines: A defence of trust in medical artificial intelligence.
   Bioethics. 2022 Feb;36(2):154-161. doi: 10.1111/bioe.12891. Epub 2021 Jun 18.
7. Encompassing trust in medical AI from the perspective of medical students: a quantitative comparative study.
   BMC Med Ethics. 2024 Sep 2;25(1):94. doi: 10.1186/s12910-024-01092-2.
8. : An investigation of public trust in South Korea.
   J Commun Healthc. 2022 Dec;15(4):276-285. doi: 10.1080/17538068.2021.1994825. Epub 2021 Nov 2.
9. Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care.
   BMC Med Ethics. 2024 Jun 22;25(1):74. doi: 10.1186/s12910-024-01066-4.
10. The Effect of Artificial Intelligence on Patient-Physician Trust: Cross-Sectional Vignette Study.
    J Med Internet Res. 2024 May 28;26:e50853. doi: 10.2196/50853.

Cited By

1. Impact of AI-Assisted Diagnosis on American Patients' Trust in and Intention to Seek Help From Health Care Professionals: Randomized, Web-Based Survey Experiment.
   J Med Internet Res. 2025 Jun 18;27:e66083. doi: 10.2196/66083.
2. Trust of Artificial Intelligence-Augmented Point-of-Care Ultrasound Among Pediatric Emergency Physicians.
   J Am Coll Emerg Physicians Open. 2025 Jun 5;6(4):100173. doi: 10.1016/j.acepjo.2025.100173. eCollection 2025 Aug.
3. Situating governance and regulatory concerns for generative artificial intelligence and large language models in medical education.
   NPJ Digit Med. 2025 May 27;8(1):315. doi: 10.1038/s41746-025-01721-z.
4. Derivation and Experimental Performance of Standard and Novel Uncertainty Calibration Techniques.
   AMIA Annu Symp Proc. 2025 May 22;2024:212-221. eCollection 2024.
5. Patient Reactions to Artificial Intelligence-Clinician Discrepancies: Web-Based Randomized Experiment.
   J Med Internet Res. 2025 May 22;27:e68823. doi: 10.2196/68823.
6. Preferences for the Use of Artificial Intelligence for Breast Cancer Screening in Australia: A Discrete Choice Experiment.
   Patient. 2025 May 10. doi: 10.1007/s40271-025-00742-w.
7. Prioritizing Trust in Podiatrists' Preference for AI in Supportive Roles Over Diagnostic Roles in Health Care: Qualitative Interview and Focus Group Study.
   JMIR Hum Factors. 2025 Feb 21;12:e59010. doi: 10.2196/59010.
8. Assessing Risk in Implementing New Artificial Intelligence Triage Tools-How Much Risk is Reasonable in an Already Risky World?
   Asian Bioeth Rev. 2025 Jan 29;17(1):187-205. doi: 10.1007/s41649-024-00348-8. eCollection 2025 Jan.
9. Patient perspectives on the use of digital medical devices and health data for AI-driven personalised medicine in Parkinson's Disease.
   Front Neurol. 2024 Dec 4;15:1453243. doi: 10.3389/fneur.2024.1453243. eCollection 2024.
10. Assessing the impact of information on patient attitudes toward artificial intelligence-based clinical decision support (AI/CDS): a pilot web-based SMART vignette study.
    J Med Ethics. 2024 Dec 12. doi: 10.1136/jme-2024-110080.