
Investigating Whether AI Will Replace Human Physicians and Understanding the Interplay of the Source of Consultation, Health-Related Stigma, and Explanations of Diagnoses on Patients' Evaluations of Medical Consultations: Randomized Factorial Experiment.

Authors

Guo Weiqi, Chen Yang

Affiliations

School of Foreign Languages, Renmin University of China, Beijing, China.

School of Journalism and Communication, Renmin University of China, Beijing, China.

Publication

J Med Internet Res. 2025 Mar 5;27:e66760. doi: 10.2196/66760.

DOI: 10.2196/66760
PMID: 40053785
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11923482/
Abstract

BACKGROUND

The increasing use of artificial intelligence (AI) in medical diagnosis and consultation promises benefits such as greater accuracy and efficiency. However, there is little evidence to systematically test whether the ideal technological promises translate into an improved evaluation of the medical consultation from the patient's perspective. This perspective is significant because AI as a technological solution does not necessarily improve patient confidence in diagnosis and adherence to treatment at the functional level, create meaningful interactions between the medical agent and the patient at the relational level, evoke positive emotions, or reduce the patient's pessimism at the emotional level.

OBJECTIVE

This study aims to investigate, from a patient-centered perspective, whether AI or human-involved AI can replace the role of human physicians in diagnosis at the functional, relational, and emotional levels as well as how some health-related differences between human-AI and human-human interactions affect patients' evaluations of the medical consultation.

METHODS

A 3 (consultation source: AI vs human-involved AI vs human) × 2 (health-related stigma: low vs high) × 2 (diagnosis explanation: without vs with explanation) factorial experiment was conducted with 249 participants. The main effects and interaction effects of the variables were examined on individuals' functional, relational, and emotional evaluations of the medical consultation.
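The 3 × 2 × 2 between-subjects design described above can be sketched with simulated data. The factor names mirror the study's conditions, but the cell size, effect sizes, and ratings below are illustrative assumptions, not the study's data:

```python
import random
from itertools import product
from statistics import mean

random.seed(0)

# The study's three between-subjects factors.
SOURCES = ["AI", "human-involved AI", "human"]
STIGMA = ["low", "high"]
EXPLANATION = ["without", "with"]

def simulate_rating(source, stigma, explanation):
    # Hypothetical trust rating on a Likert-type scale; the baseline
    # bump for human physicians and for explanations is assumed here
    # purely for illustration.
    base = 4.8 if source == "human" else 4.4
    if explanation == "with":
        base += 0.3
    return base + random.gauss(0, 0.3)

# 20 simulated participants per cell of the 3 x 2 x 2 design.
cells = {cell: [simulate_rating(*cell) for _ in range(20)]
         for cell in product(SOURCES, STIGMA, EXPLANATION)}

# Main effect of consultation source: average each source's ratings
# over the other two factors.
source_means = {s: mean(r for cell, ratings in cells.items()
                        if cell[0] == s for r in ratings)
                for s in SOURCES}

# Main effect of diagnosis explanation, computed the same way.
explanation_means = {e: mean(r for cell, ratings in cells.items()
                             if cell[2] == e for r in ratings)
                     for e in EXPLANATION}

print(source_means)
print(explanation_means)
```

In the actual study these marginal means would be compared with a factorial ANOVA (main effects plus interactions) rather than by inspection; the sketch only shows how the 12 cells and the marginal averages are laid out.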

RESULTS

Functionally, people trusted the diagnosis of the human physician (mean 4.78-4.85, SD 0.06-0.07) more than medical AI (mean 4.34-4.55, SD 0.06-0.07) or human-involved AI (mean 4.39-4.56, SD 0.06-0.07; P<.001), but at the relational and emotional levels, there was no significant difference between human-AI and human-human interactions (P>.05). Health-related stigma had no significant effect on how people evaluated the medical consultation or contributed to preferring AI-powered systems over humans (P>.05); however, providing explanations of the diagnosis significantly improved the functional (P<.001), relational (P<.05), and emotional (P<.05) evaluations of the consultation for all 3 medical agents.

CONCLUSIONS

The findings imply that at the current stage of AI development, people trust human expertise more than accurate AI, especially for decisions traditionally made by humans, such as medical diagnosis, supporting the algorithm aversion theory. Surprisingly, even for highly stigmatized diseases such as AIDS, where we assume anonymity and privacy are preferred in medical consultations, the dehumanization of AI does not contribute significantly to the preference for AI-powered medical agents over humans, suggesting that instrumental needs of diagnosis override patient privacy concerns. Furthermore, explaining the diagnosis effectively improves treatment adherence, strengthens the physician-patient relationship, and fosters positive emotions during the consultation. This provides insights for the design of AI medical agents, which have long been criticized for lacking transparency while making highly consequential decisions. This study concludes by outlining theoretical contributions to research on health communication and human-AI interaction and discusses the implications for the design and application of medical AI.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5fbf/11923482/384c49338daf/jmir_v27i1e66760_fig1.jpg

Similar Articles

1. Investigating Whether AI Will Replace Human Physicians and Understanding the Interplay of the Source of Consultation, Health-Related Stigma, and Explanations of Diagnoses on Patients' Evaluations of Medical Consultations: Randomized Factorial Experiment.
   J Med Internet Res. 2025 Mar 5;27:e66760. doi: 10.2196/66760.
2. Care to Explain? AI Explanation Types Differentially Impact Chest Radiograph Diagnostic Performance and Physician Trust in AI.
   Radiology. 2024 Nov;313(2):e233261. doi: 10.1148/radiol.233261.
3. Patients' Trust in Artificial Intelligence-based Decision-making for Localized Prostate Cancer: Results from a Prospective Trial.
   Eur Urol Focus. 2024 Jul;10(4):654-661. doi: 10.1016/j.euf.2023.10.020. Epub 2023 Nov 1.
4. Artificial intelligence and the future of psychiatry: Insights from a global physician survey.
   Artif Intell Med. 2020 Jan;102:101753. doi: 10.1016/j.artmed.2019.101753. Epub 2019 Nov 18.
5. Facilitating Trust Calibration in Artificial Intelligence-Driven Diagnostic Decision Support Systems for Determining Physicians' Diagnostic Accuracy: Quasi-Experimental Study.
   JMIR Form Res. 2024 Nov 27;8:e58666. doi: 10.2196/58666.
6. Trust in artificial intelligence for medical diagnoses.
   Prog Brain Res. 2020;253:263-282. doi: 10.1016/bs.pbr.2020.06.006. Epub 2020 Jul 2.
7. The radiologist as a physician - artificial intelligence as a way to overcome tension between the patient, technology, and referring physicians - a narrative review.
   Rofo. 2024 Nov;196(11):1115-1124. doi: 10.1055/a-2271-0799. Epub 2024 Apr 3.
8. Patients' Perceptions Toward Human-Artificial Intelligence Interaction in Health Care: Experimental Study.
   J Med Internet Res. 2021 Nov 25;23(11):e25856. doi: 10.2196/25856.
9. Development and Validation of an Artificial Intelligence System to Optimize Clinician Review of Patient Records.
   JAMA Netw Open. 2021 Jul 1;4(7):e2117391. doi: 10.1001/jamanetworkopen.2021.17391.
10. Application of AI Chatbot in Responding to Asynchronous Text-Based Messages From Patients With Cancer: Comparative Study.
    J Med Internet Res. 2025 May 21;27:e67462. doi: 10.2196/67462.

Cited By

1. Can AI match emergency physicians in managing common emergency cases? A comparative performance evaluation.
   BMC Emerg Med. 2025 Jul 31;25(1):142. doi: 10.1186/s12873-025-01303-y.
2. Preparing Tomorrow's Physicians: The Case for Machine Learning in Medical Education.
   J Med Syst. 2025 Jun 11;49(1):79. doi: 10.1007/s10916-025-02214-y.

References

1. Effects of Exposure to Conflicting Information About Mammography on Cancer Information Overload, Perceived Scientists' Credibility, and Perceived Journalists' Credibility.
   Health Commun. 2023 Oct;38(11):2481-2490. doi: 10.1080/10410236.2022.2077163. Epub 2022 May 23.
2. Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review.
   J Med Internet Res. 2022 Jan 14;24(1):e32939. doi: 10.2196/32939.
3. Beneficent dehumanization: Employing artificial intelligence and carebots to mitigate shame-induced barriers to medical care.
   Bioethics. 2022 Feb;36(2):187-193. doi: 10.1111/bioe.12986. Epub 2021 Dec 23.
4. Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey.
   Digit Health. 2021 Dec 8;7:20552076211063012. doi: 10.1177/20552076211063012. eCollection 2021 Jan-Dec.
5. Patients' Perceptions Toward Human-Artificial Intelligence Interaction in Health Care: Experimental Study.
   J Med Internet Res. 2021 Nov 25;23(11):e25856. doi: 10.2196/25856.
6. Barriers and facilitators to engagement with artificial intelligence (AI)-based chatbots for sexual and reproductive health advice: a qualitative analysis.
   Sex Health. 2021 Nov;18(5):385-393. doi: 10.1071/SH21123.
7. Examining the effect of explanation on satisfaction and trust in AI diagnostic systems.
   BMC Med Inform Decis Mak. 2021 Jun 3;21(1):178. doi: 10.1186/s12911-021-01542-6.
8. What makes AI 'intelligent' and 'caring'? Exploring affect and relationality across three sites of intelligence and care.
   Soc Sci Med. 2021 May;277:113874. doi: 10.1016/j.socscimed.2021.113874. Epub 2021 Mar 23.
9. Role of Artificial Intelligence Applications in Real-Life Clinical Practice: Systematic Review.
   J Med Internet Res. 2021 Apr 22;23(4):e25759. doi: 10.2196/25759.
10. Utilization of Self-Diagnosis Health Chatbots in Real-World Settings: Case Study.
    J Med Internet Res. 2021 Jan 6;23(1):e19928. doi: 10.2196/19928.