
Trust in artificial intelligence for medical diagnoses.

Affiliations

Faculty of Psychology and Educational Sciences, Alexandru Ioan Cuza University, Iasi, Romania.

School of Computer Science, University of Nottingham, Nottingham, United Kingdom.

Publication Information

Prog Brain Res. 2020;253:263-282. doi: 10.1016/bs.pbr.2020.06.006. Epub 2020 Jul 2.


DOI: 10.1016/bs.pbr.2020.06.006
PMID: 32771128
Abstract

We present two online experiments investigating trust in artificial intelligence (AI) as a primary and secondary medical diagnosis tool, and one experiment testing two methods to increase trust in AI. Participants in Experiment 1 read hypothetical scenarios of low- and high-risk diseases, followed by two sequential diagnoses, and estimated their trust in the medical findings. In three between-participants groups, the first and second diagnoses were given by: human and AI, AI and human, and human and human doctors, respectively. In Experiment 2, we examined whether people expected higher standards of performance from AI than from human doctors in order to trust AI treatment recommendations. In Experiment 3, we investigated the possibility of increasing trust in AI diagnoses by: (i) informing participants that the AI outperforms the human doctor, and (ii) nudging them to prefer AI diagnoses in a choice between AI and human doctors. Results indicate lower trust in AI overall, as well as lower trust in diagnoses of high-risk diseases. Participants trusted AI doctors less than humans for first diagnoses, and they were also less likely to trust a second opinion from an AI doctor for high-risk diseases. Surprisingly, the results show that people hold comparable standards of performance for AI and human doctors, and that trust in AI does not increase when people are told the AI outperforms the human doctor. Importantly, we find that the gap in trust between AI and human diagnoses is eliminated when people are nudged to select AI in a free-choice paradigm between human and AI diagnoses, with trust in AI diagnoses significantly increased when participants could choose their doctor. These findings isolate control over the choice of one's medical practitioner as a valid candidate for fostering trust in future medical diagnosis and highlight a solid potential path to smoother acceptance of AI diagnoses amongst patients.

Similar Articles

[1]
Trust in artificial intelligence for medical diagnoses.

Prog Brain Res. 2020

[2]
Do patients prefer a human doctor, artificial intelligence, or a blend, and is this preference dependent on medical discipline? Empirical evidence and implications for medical practice.

Front Psychol. 2024-8-12

[3]
Patients' Preferences for Artificial Intelligence Applications Versus Clinicians in Disease Diagnosis During the SARS-CoV-2 Pandemic in China: Discrete Choice Experiment.

J Med Internet Res. 2021-2-23

[4]
The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions.

BMC Med Inform Decis Mak. 2023-4-20

[5]
Effects of a Differential Diagnosis List of Artificial Intelligence on Differential Diagnoses by Physicians: An Exploratory Analysis of Data from a Randomized Controlled Study.

Int J Environ Res Public Health. 2021-5-23

[6]
Intentional machines: A defence of trust in medical artificial intelligence.

Bioethics. 2022-2

[7]
Encompassing trust in medical AI from the perspective of medical students: a quantitative comparative study.

BMC Med Ethics. 2024-9-2

[8]
: An investigation of public trust in South Korea.

J Commun Healthc. 2022-12

[9]
Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care.

BMC Med Ethics. 2024-6-22

[10]
The Effect of Artificial Intelligence on Patient-Physician Trust: Cross-Sectional Vignette Study.

J Med Internet Res. 2024-5-28

Cited By

[1]
Impact of AI-Assisted Diagnosis on American Patients' Trust in and Intention to Seek Help From Health Care Professionals: Randomized, Web-Based Survey Experiment.

J Med Internet Res. 2025-6-18

[2]
Trust of Artificial Intelligence-Augmented Point-of-Care Ultrasound Among Pediatric Emergency Physicians.

J Am Coll Emerg Physicians Open. 2025-6-5

[3]
Situating governance and regulatory concerns for generative artificial intelligence and large language models in medical education.

NPJ Digit Med. 2025-5-27

[4]
Derivation and Experimental Performance of Standard and Novel Uncertainty Calibration Techniques.

AMIA Annu Symp Proc. 2025-5-22

[5]
Patient Reactions to Artificial Intelligence-Clinician Discrepancies: Web-Based Randomized Experiment.

J Med Internet Res. 2025-5-22

[6]
Preferences for the Use of Artificial Intelligence for Breast Cancer Screening in Australia: A Discrete Choice Experiment.

Patient. 2025-5-10

[7]
Prioritizing Trust in Podiatrists' Preference for AI in Supportive Roles Over Diagnostic Roles in Health Care: Qualitative Interview and Focus Group Study.

JMIR Hum Factors. 2025-2-21

[8]
Assessing Risk in Implementing New Artificial Intelligence Triage Tools-How Much Risk is Reasonable in an Already Risky World?

Asian Bioeth Rev. 2025-1-29

[9]
Patient perspectives on the use of digital medical devices and health data for AI-driven personalised medicine in Parkinson's Disease.

Front Neurol. 2024-12-4

[10]
Assessing the impact of information on patient attitudes toward artificial intelligence-based clinical decision support (AI/CDS): a pilot web-based SMART vignette study.

J Med Ethics. 2024-12-12
