


Influence of believed AI involvement on the perception of digital medical advice.

Affiliations

Institute of Psychology, Julius-Maximilians-Universität Würzburg, Würzburg, Germany.

Judge Business School, University of Cambridge, Cambridge, UK.

Publication Information

Nat Med. 2024 Nov;30(11):3098-3100. doi: 10.1038/s41591-024-03180-7. Epub 2024 Jul 25.

DOI: 10.1038/s41591-024-03180-7
PMID: 39054373
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11564086/
Abstract

Large language models offer novel opportunities to seek digital medical advice. While previous research primarily addressed the performance of such artificial intelligence (AI)-based tools, public perception of these advancements received little attention. In two preregistered studies (n = 2,280), we presented participants with scenarios of patients obtaining medical advice. All participants received identical information, but we manipulated the putative source of this advice ('AI', 'human physician', 'human + AI'). 'AI'- and 'human + AI'-labeled advice was evaluated as significantly less reliable and less empathetic compared with 'human'-labeled advice. Moreover, participants indicated lower willingness to follow the advice when AI was believed to be involved in advice generation. Our findings point toward an anti-AI bias when receiving digital medical advice, even when AI is supposedly supervised by physicians. Given the tremendous potential of AI for medicine, elucidating ways to counteract this bias should be an important objective of future research.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/371c/11564086/f1b8b8241011/41591_2024_3180_Fig1_HTML.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/371c/11564086/b32b20ab9917/41591_2024_3180_Fig2_HTML.jpg

Similar Articles

1. Influence of believed AI involvement on the perception of digital medical advice.
Nat Med. 2024 Nov;30(11):3098-3100. doi: 10.1038/s41591-024-03180-7. Epub 2024 Jul 25.
2. Care to Explain? AI Explanation Types Differentially Impact Chest Radiograph Diagnostic Performance and Physician Trust in AI.
Radiology. 2024 Nov;313(2):e233261. doi: 10.1148/radiol.233261.
3. Me vs. the machine? Subjective evaluations of human- and AI-generated advice.
Sci Rep. 2025 Feb 1;15(1):3980. doi: 10.1038/s41598-025-86623-6.
4. Public Perception on Artificial Intelligence-Driven Mental Health Interventions: Survey Research.
JMIR Form Res. 2024 Nov 28;8:e64380. doi: 10.2196/64380.
5. Investigating Older Adults' Perceptions of AI Tools for Medication Decisions: Vignette-Based Experimental Survey.
J Med Internet Res. 2024 Dec 16;26:e60794. doi: 10.2196/60794.
6. Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays.
Sci Rep. 2023 Jan 25;13(1):1383. doi: 10.1038/s41598-023-28633-w.
7. Allied Health Professionals' Perceptions of Artificial Intelligence in the Clinical Setting: Cross-Sectional Survey.
JMIR Form Res. 2024 Dec 30;8:e57204. doi: 10.2196/57204.
8. Perceived Trust and Professional Identity Threat in AI-Based Clinical Decision Support Systems: Scenario-Based Experimental Study on AI Process Design Features.
JMIR Form Res. 2025 Mar 26;9:e64266. doi: 10.2196/64266.
9. Readiness, knowledge, and perception towards artificial intelligence of medical students at faculty of medicine, Pelita Harapan University, Indonesia: a cross sectional study.
BMC Med Educ. 2024 Sep 27;24(1):1044. doi: 10.1186/s12909-024-06058-x.
10. Catenation between mHealth application advertisements and cardiovascular diseases: moderation of artificial intelligence (AI)-enabled internet of things, digital divide, and individual trust.
BMC Public Health. 2025 Mar 19;25(1):1064. doi: 10.1186/s12889-025-22082-y.

Cited By

1. The imitation game: large language models versus multidisciplinary tumor boards: benchmarking AI against 21 sarcoma centers from the ring trial.
J Cancer Res Clin Oncol. 2025 Sep 10;151(9):248. doi: 10.1007/s00432-025-06304-9.
2. Public Perception of Physicians Who Use Artificial Intelligence.
JAMA Netw Open. 2025 Jul 1;8(7):e2521643. doi: 10.1001/jamanetworkopen.2025.21643.
3. Artificial intelligence vs. human expert: Licensed mental health clinicians' blinded evaluation of AI-generated and expert psychological advice on quality, empathy, and perceived authorship.
Internet Interv. 2025 Jun 3;41:100841. doi: 10.1016/j.invent.2025.100841. eCollection 2025 Sep.
4. Interacting with fallible AI: is distrust helpful when receiving AI misclassifications?
Front Psychol. 2025 May 27;16:1574809. doi: 10.3389/fpsyg.2025.1574809. eCollection 2025.
5. Large Language Models for Pre-mediation Counseling in Medical Disputes: A Comparative Evaluation against Human Experts.
Healthc Inform Res. 2025 Apr;31(2):200-208. doi: 10.4258/hir.2025.31.2.200. Epub 2025 Apr 30.
6. Rethinking clinical trials for medical AI with dynamic deployments of adaptive systems.
NPJ Digit Med. 2025 May 6;8(1):252. doi: 10.1038/s41746-025-01674-3.
7. Simulation-Based Evaluation of Large Language Models for Comorbidity Detection in Sleep Medicine - a Pilot Study on ChatGPT o1 Preview.
Nat Sci Sleep. 2025 Apr 29;17:677-688. doi: 10.2147/NSS.S510254. eCollection 2025.
8. Large language models for pretreatment education in pediatric radiation oncology: A comparative evaluation study.
Clin Transl Radiat Oncol. 2025 Jan 6;51:100914. doi: 10.1016/j.ctro.2025.100914. eCollection 2025 Mar.
9. Assessment of decision-making with locally run and web-based large language models versus human board recommendations in otorhinolaryngology, head and neck surgery.
Eur Arch Otorhinolaryngol. 2025 Mar;282(3):1593-1607. doi: 10.1007/s00405-024-09153-3. Epub 2025 Jan 10.
10. Can Large Language Models Help Healthcare?
J Atheroscler Thromb. 2025 May 1;32(5):560-562. doi: 10.5551/jat.ED273. Epub 2024 Nov 26.

References

1. Exploring factors influencing user perspective of ChatGPT as a technology that assists in healthcare decision making: A cross sectional survey study.
PLoS One. 2024 Mar 8;19(3):e0296151. doi: 10.1371/journal.pone.0296151. eCollection 2024.
2. Security Implications of AI Chatbots in Health Care.
J Med Internet Res. 2023 Nov 28;25:e47551. doi: 10.2196/47551.
3. ChatGPT-Generated Differential Diagnosis Lists for Complex Case-Derived Clinical Vignettes: Diagnostic Accuracy Evaluation.
JMIR Med Inform. 2023 Oct 9;11:e48808. doi: 10.2196/48808.
4. User Intentions to Use ChatGPT for Self-Diagnosis and Health-Related Purposes: Cross-sectional Survey Study.
JMIR Hum Factors. 2023 May 17;10:e47564. doi: 10.2196/47564.
5. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum.
JAMA Intern Med. 2023 Jun 1;183(6):589-596. doi: 10.1001/jamainternmed.2023.1838.
6. A Review of Approaches for Predicting Drug-Drug Interactions Based on Machine Learning.
Front Pharmacol. 2022 Jan 28;12:814858. doi: 10.3389/fphar.2021.814858. eCollection 2021.
7. Patients' Perceptions Toward Human-Artificial Intelligence Interaction in Health Care: Experimental Study.
J Med Internet Res. 2021 Nov 25;23(11):e25856. doi: 10.2196/25856.
8. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review.
Lancet Digit Health. 2021 Sep;3(9):e599-e611. doi: 10.1016/S2589-7500(21)00132-1.
9. lab.js: A free, open, online study builder.
Behav Res Methods. 2022 Apr;54(2):556-573. doi: 10.3758/s13428-019-01283-5.
10. Do as AI say: susceptibility in deployment of clinical decision-aids.
NPJ Digit Med. 2021 Feb 19;4(1):31. doi: 10.1038/s41746-021-00385-9.