User Intent to Use DeepSeek for Health Care Purposes and Their Trust in the Large Language Model: Multinational Survey Study.

Author Information

Choudhury Avishek, Shahsavar Yeganeh, Shamszare Hamid

Affiliation

Industrial and Management Systems Engineering, Benjamin M Statler College of Engineering and Mineral Resources, West Virginia University, 1306 Evansdale Drive, 321 Engineering Sciences Building, Morgantown, WV 26506, United States. Phone: +1 304 293 4970.

Publication Information

JMIR Hum Factors. 2025 May 26;12:e72867. doi: 10.2196/72867.

Abstract

BACKGROUND

Generative artificial intelligence (AI), particularly large language models (LLMs), has generated unprecedented interest in applications ranging from everyday questions and answers to health-related inquiries. However, little is known about how everyday users decide whether to trust and adopt these technologies in high-stakes contexts such as personal health.

OBJECTIVES

This study examines how ease of use, perceived usefulness, and risk perception interact to shape user trust in and intentions to adopt DeepSeek, an emerging LLM-based platform, for health care purposes.

METHODS

We adapted survey items from validated technology acceptance scales to assess user perception of DeepSeek. A 12-item Likert scale questionnaire was developed and pilot-tested (n=20). It was then distributed on the web to users in India, the United Kingdom, and the United States who had used DeepSeek within the past 2 weeks. Data analysis involved descriptive frequency assessments and partial least squares structural equation modeling (PLS-SEM). The model assessed direct and indirect effects, including potential quadratic relationships.
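The abstract does not include the model specification. As a rough sketch of how the hypothesized structural paths, including the quadratic terms, could be encoded, the snippet below uses the open-source Python package semopy. The variable names, composite scoring, and the input file deepseek_survey.csv are assumptions, and semopy fits a covariance-based SEM rather than the PLS-SEM reported in the study.

```python
# Illustrative sketch only, not the authors' analysis code.
# Assumes deepseek_survey.csv holds one row per respondent with composite
# (mean) Likert scores for ease_of_use, usefulness, risk, trust, and intent.
import pandas as pd
from semopy import Model

data = pd.read_csv("deepseek_survey.csv")

# Squared terms to probe the quadratic (threshold/plateau) effects
# described in the Methods and Results.
data["ease_of_use_sq"] = data["ease_of_use"] ** 2
data["risk_sq"] = data["risk"] ** 2

# Trust mediates ease of use and usefulness; all predictors also
# have direct paths to usage intent.
model_desc = """
trust ~ ease_of_use + usefulness + risk
intent ~ trust + ease_of_use + ease_of_use_sq + usefulness + risk + risk_sq
"""

model = Model(model_desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```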

RESULTS

A total of 556 complete responses were collected, with respondents almost evenly split across India (n=184), the United Kingdom (n=185), and the United States (n=187). Regarding AI in health care, when asked whether they were comfortable with their health care provider using AI tools, 59.3% (n=330) were fine with AI use provided their doctor verified its output, and 31.5% (n=175) were enthusiastic about its use without conditions. DeepSeek was used primarily for academic and educational purposes; 50.7% (n=282) used it as a search engine, and 47.7% (n=265) used it for health-related queries. When asked about their intent to adopt DeepSeek over other LLMs such as ChatGPT, 52.1% (n=290) were likely to switch, and 28.9% (n=161) were very likely to do so. The study revealed that trust plays a pivotal mediating role: ease of use exerts a significant indirect impact on usage intentions through trust, while perceived usefulness contributes both to trust development and directly to adoption. By contrast, risk perception negatively affects usage intent, emphasizing the importance of robust data governance and transparency. Significant nonlinear paths were observed for ease of use and risk, indicating threshold or plateau effects.
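As a point of reference for the mediation finding (the abstract itself reports no coefficient values), the indirect effect estimated in such a model is the product of the two constituent path coefficients, and the total effect is the sum of the direct and indirect components:

```latex
% Indirect and total effects in the trust-mediated path
% (illustrative notation, not values from the study):
\[
\beta_{\text{ease}\to\text{intent}}^{\text{indirect}}
  = \beta_{\text{ease}\to\text{trust}} \times \beta_{\text{trust}\to\text{intent}},
\qquad
\beta^{\text{total}} = \beta^{\text{direct}} + \beta^{\text{indirect}}
\]
```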

CONCLUSIONS

Users are receptive to DeepSeek when it is easy to use, useful, and trustworthy. The model highlights trust as a mediator and shows nonlinear dynamics shaping AI-driven health care tool adoption. Expanding the model with mediators such as privacy and cultural differences could provide deeper insights. Longitudinal experimental designs could establish causality. Further investigation into threshold and plateau phenomena could refine our understanding of user perceptions as they become more familiar with AI-driven health care tools.
