Faculty of Psychology and Educational Sciences, Alexandru Ioan Cuza University, Iasi, Romania.
School of Computer Science, University of Nottingham, Nottingham, United Kingdom.
Prog Brain Res. 2020;253:263-282. doi: 10.1016/bs.pbr.2020.06.006. Epub 2020 Jul 2.
We present two online experiments investigating trust in artificial intelligence (AI) as a primary and secondary medical diagnosis tool, and one experiment testing two methods to increase trust in AI. Participants in Experiment 1 read hypothetical scenarios of low- and high-risk diseases, followed by two sequential diagnoses, and estimated their trust in the medical findings. In three between-participants groups, the first and second diagnoses were given by: a human doctor and an AI, an AI and a human doctor, and two human doctors, respectively. In Experiment 2 we examined whether people expect higher standards of performance from AI than from human doctors in order to trust AI treatment recommendations. In Experiment 3 we investigated the possibility of increasing trust in AI diagnoses by: (i) informing participants that the AI outperforms the human doctor, and (ii) nudging them to prefer AI diagnoses in a choice between AI and human doctors. Results indicate lower trust in AI overall, as well as lower trust in diagnoses of high-risk diseases. Participants trusted AI doctors less than human doctors for first diagnoses, and they were also less likely to trust a second opinion from an AI doctor for high-risk diseases. Surprisingly, the results show that people hold comparable standards of performance for AI and human doctors, and that trust in AI does not increase when people are told the AI outperforms the human doctor. Importantly, we find that the gap in trust between AI and human diagnoses is eliminated when people are nudged to select AI in a free-choice paradigm between human and AI diagnoses, with trust in AI diagnoses significantly increased when participants could choose their doctor. These findings identify control over one's medical practitioner as a valid candidate for building trust in medical diagnosis and highlight a promising path toward smoother acceptance of AI diagnoses amongst patients.