

User Intent to Use DeepSeek for Health Care Purposes and Their Trust in the Large Language Model: Multinational Survey Study.

Authors

Choudhury Avishek, Shahsavar Yeganeh, Shamszare Hamid

Affiliation

Industrial and Management Systems Engineering, Benjamin M Statler College of Engineering and Mineral Resources, West Virginia University, 1306 Evansdale Drive, 321 Engineering Sciences Building, Morgantown, WV 26506, United States. Phone: +1 304 293 4970.

Publication

JMIR Hum Factors. 2025 May 26;12:e72867. doi: 10.2196/72867.

DOI: 10.2196/72867
PMID: 40418796
Abstract

BACKGROUND

Generative artificial intelligence (AI), particularly large language models (LLMs), has generated unprecedented interest in applications ranging from everyday questions and answers to health-related inquiries. However, little is known about how everyday users decide whether to trust and adopt these technologies in high-stakes contexts such as personal health.

OBJECTIVES

This study examines how ease of use, perceived usefulness, and risk perception interact to shape user trust in and intentions to adopt DeepSeek, an emerging LLM-based platform, for health care purposes.

METHODS

We adapted survey items from validated technology acceptance scales to assess user perceptions of DeepSeek. A 12-item Likert-scale questionnaire was developed and pilot-tested (n=20), then distributed on the web to users in India, the United Kingdom, and the United States who had used DeepSeek within the past 2 weeks. Data analysis involved descriptive frequency assessments and partial least squares structural equation modeling (PLS-SEM). The model assessed direct and indirect effects, including potential quadratic relationships.
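The mediation structure this kind of model assesses (trust carrying part of the effect of ease of use onto usage intent) can be sketched with toy data. This is a plain OLS illustration of an indirect effect, not the authors' PLS-SEM analysis, and the variable names (`ease`, `trust`, `intent`) are assumptions for the sketch:

```python
# Minimal mediation sketch: indirect effect = a * b, where
#   a  = slope of trust on ease of use
#   b  = slope of intent on trust, controlling for ease of use
#   c' = remaining direct slope of intent on ease of use
import random
import statistics

random.seed(0)
n = 200
ease = [random.gauss(0, 1) for _ in range(n)]
# trust depends on ease of use (path a) plus noise
trust = [0.6 * e + random.gauss(0, 1) for e in ease]
# intent depends on trust (path b) and directly on ease (path c')
intent = [0.5 * t + 0.2 * e + random.gauss(0, 1) for t, e in zip(trust, ease)]

def slope(x, y):
    """OLS slope of y on a single predictor x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

def two_predictor_slopes(x1, x2, y):
    """OLS slopes of y on x1 and x2 via the normal equations (centered)."""
    m1, m2, my = statistics.fmean(x1), statistics.fmean(x2), statistics.fmean(y)
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

a = slope(ease, trust)
b, c_prime = two_predictor_slopes(trust, ease, intent)
indirect = a * b                 # mediated effect of ease on intent
total = slope(ease, intent)      # equals c' + a*b exactly under OLS

print(f"a={a:.2f}  b={b:.2f}  c'={c_prime:.2f}  indirect={indirect:.2f}  total={total:.2f}")
```

The decomposition `total = c' + a*b` holds exactly for linear OLS, which is why the abstract can speak of ease of use acting on usage intent "through" trust.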

RESULTS

A total of 556 complete responses were collected, with respondents almost evenly split across India (n=184), the United Kingdom (n=185), and the United States (n=187). Regarding AI in health care, when asked whether they were comfortable with their health care provider using AI tools, 59.3% (n=330) were fine with AI use provided their doctor verified its output, and 31.5% (n=175) were enthusiastic about its use without conditions. DeepSeek was used primarily for academic and educational purposes: 50.7% (n=282) used it as a search engine, and 47.7% (n=265) used it for health-related queries. When asked about their intent to adopt DeepSeek over other LLMs such as ChatGPT, 52.1% (n=290) were likely to switch, and 28.9% (n=161) were very likely to do so. The study revealed that trust plays a pivotal mediating role: ease of use exerts a significant indirect effect on usage intentions through trust, while perceived usefulness contributes to both trust development and direct adoption. By contrast, risk perception negatively affects usage intent, underscoring the importance of robust data governance and transparency. Significant nonlinear paths were observed for ease of use and risk, indicating threshold or plateau effects.
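A plateau effect of the kind reported for ease of use can be seen even without fitting a quadratic term: split respondents into thirds by the predictor and compare group means. Under a plateau, the gain from the middle to the top third is smaller than from the bottom to the middle third. This is a toy simulation under assumed coefficients, not the study's data:

```python
# Toy plateau check: intent rises with ease of use but the gains taper off
# (simulated here with a concave 0.8*e - 0.3*e**2 relationship plus noise).
import random
import statistics

random.seed(1)
n = 300
data = []
for _ in range(n):
    e = random.gauss(0, 1)
    data.append((e, 0.8 * e - 0.3 * e * e + random.gauss(0, 0.3)))

data.sort()  # order respondents by ease of use
third = n // 3
lo = statistics.fmean(y for _, y in data[:third])
mid = statistics.fmean(y for _, y in data[third:2 * third])
hi = statistics.fmean(y for _, y in data[2 * third:])

# The low-to-middle gain exceeds the middle-to-high gain: a plateau.
print(f"lo={lo:.2f}  mid={mid:.2f}  hi={hi:.2f}")
```

In a real analysis the same pattern shows up as a significant negative coefficient on the squared predictor, which is what the quadratic paths in the structural model test for.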

CONCLUSIONS

Users are receptive to DeepSeek when it is easy to use, useful, and trustworthy. The model highlights trust as a mediator and shows nonlinear dynamics shaping AI-driven health care tool adoption. Expanding the model with mediators such as privacy and cultural differences could provide deeper insights. Longitudinal experimental designs could establish causality. Further investigation into threshold and plateau phenomena could refine our understanding of user perceptions as they become more familiar with AI-driven health care tools.


