Impact of AI-Assisted Diagnosis on American Patients' Trust in and Intention to Seek Help From Health Care Professionals: Randomized, Web-Based Survey Experiment.

Author Information

Chen Catherine, Cui Zhihan

Affiliations

Manship School of Mass Communication, Louisiana State University, Baton Rouge, LA, United States.

Department of Political Science, Louisiana State University, Baton Rouge, LA, United States.

Publication Information

J Med Internet Res. 2025 Jun 18;27:e66083. doi: 10.2196/66083.

Abstract

BACKGROUND

Artificial intelligence (AI) technologies are increasingly integrated into medical practice, with AI-assisted diagnosis showing promise. However, patient acceptance of AI-assisted diagnosis, compared with human-only procedures, remains understudied, especially in the wake of generative AI advancements such as ChatGPT.

OBJECTIVE

This research examines patient preferences for doctors using AI assistance versus those relying solely on human expertise. It also studies demographic, social, and experiential factors influencing these preferences.

METHODS

We conducted a preregistered 4-group randomized survey experiment among a national sample representative of the US population on several demographic benchmarks (n=1762). Participants viewed identical doctor profiles, with varying AI usage descriptions: no AI mention (control, n=421), explicit nonuse (No AI, n=435), moderate use (Moderate AI, n=481), and extensive use (Extensive AI, n=425). Respondents reported their tendency to seek help, trust in the doctor as a person and a professional, knowledge of AI, frequency of using AI in their daily lives, demographics, and partisan identification. We analyzed the results with ordinary least squares regression (controlling for sociodemographic factors), mediation analysis, and moderation analysis. We also explored the moderating effect of past AI experiences on the tendency to seek help and trust in the doctor.
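The analysis described above — a 5-point Likert outcome recoded to 0–1 with equal intervals, regressed on the treatment with demographic controls and a treatment × AI-use-frequency interaction — can be sketched as follows. This is an illustrative sketch with synthetic data, not the authors' code; the variable names, the binary treatment coding, and the single `age` control are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1762  # total sample size reported in the study

# Hypothetical coding: 0 = human-only condition, 1 = AI-assisted condition
ai_condition = rng.integers(0, 2, n)
# Self-reported daily AI use frequency, rescaled to the 0-1 range
ai_use_freq = rng.integers(0, 5, n) / 4.0
age = rng.integers(18, 80, n)  # stand-in demographic control

# 5-point Likert trust item recoded to 0, 0.25, 0.5, 0.75, 1 (equal intervals)
trust = (rng.integers(1, 6, n) - 1) / 4.0

# Design matrix: intercept, treatment, moderator, interaction, control
X = np.column_stack([
    np.ones(n),
    ai_condition,
    ai_use_freq,
    ai_condition * ai_use_freq,  # moderation (interaction) term
    age,
])
# Ordinary least squares via least-squares solve
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)
# beta[1] is the treatment main effect; beta[3] is the
# treatment x use-frequency interaction examined in the moderation analysis
```

On synthetic noise these coefficients are near zero; in the study, the real analogue of `beta[1]` was the large negative AI-assistance effect, and the interaction term captured how the trust gap narrowed with AI use frequency.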

RESULTS

Mentioning that the doctor uses AI to assist in diagnosis consistently decreased trust and intention to seek help. Trust and intention to seek help (measured with a 5-point Likert scale and coded as 0-1 with equal intervals in between) were highest when AI was explicitly absent (control group: mean 0.50; No AI group: mean 0.63) and lowest when AI was extensively used (Extensive AI group: mean 0.30; Moderate AI group: mean 0.34). A linear regression controlling for demographics suggested that the negative effect of AI assistance was significant with a large effect size (β=-.45, 95% CI -0.49 to -0.40, t1740=-20.81; P<.001). This pattern was consistent for trust in the doctor as a person (β=-.33, 95% CI -0.37 to -0.28, t1733=-14.41; P<.001) and as a professional (β=-.40, 95% CI -0.45 to -0.36, t1735=-18.54; P<.001). Results were consistent across age, gender, education, and partisanship, indicating a broad aversion to AI-assisted diagnosis. Moderation analyses suggested that the "AI trust gap" shrank as AI use frequency increased (interaction term: β=.09, 95% CI 0.04-0.13, t1735=4.06; P<.001) but expanded as self-reported knowledge increased (interaction term: β=-.04, 95% CI -0.08 to 0.00, t1736=-1.75; P=.08).

CONCLUSIONS

Despite AI's growing role in medicine, patients still prefer human-only expertise, regardless of partisanship and demographics, underscoring the need for strategies to build trust in AI technologies in health care.
