Guo Shuangyan, Song Yang, Chen Guanyun, Han Hongxin, Wu Hong, Ma Jingdong
School of Medicine and Health Management, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China.
Digit Health. 2025 Aug 28;11:20552076251374121. doi: 10.1177/20552076251374121. eCollection 2025 Jan-Dec.
As a representative product of generative artificial intelligence (GenAI), ChatGPT demonstrates significant potential to enhance healthcare outcomes and improve the quality of life for healthcare consumers. However, current research has not yet quantitatively analysed trust-related issues from both the healthcare consumer perspective and the uncertainty perspective of human-computer interaction.
This study aims to analyse the antecedents of healthcare consumers' trust in ChatGPT and their adoption intentions towards ChatGPT-generated health information from the perspective of uncertainty reduction.
An anonymous online survey was conducted with healthcare consumers in China between September and October 2024. The survey included questions on key constructs: social influence, situational normality, anthropomorphism, autonomy, personalisation, information quality, information disclosure, trust in ChatGPT, and intention to adopt health information. Each item was scored on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). SmartPLS 4.0 was used to analyse the data and test the proposed theoretical model.
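For readers unfamiliar with this workflow, the sketch below illustrates one common pre-estimation check on Likert-scale data: computing Cronbach's alpha for a block of items measuring a single construct. The column names and simulated responses are hypothetical; the study's actual measurement and structural models were estimated in SmartPLS 4.0.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of Likert items measuring one construct."""
    k = items.shape[1]                         # number of items in the block
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: three "trust in ChatGPT" items on a 1-7 scale.
rng = np.random.default_rng(42)
base = rng.integers(1, 8, size=200)
df = pd.DataFrame({
    "trust_1": base,
    "trust_2": np.clip(base + rng.integers(-1, 2, size=200), 1, 7),
    "trust_3": np.clip(base + rng.integers(-1, 2, size=200), 1, 7),
})
print(f"Cronbach's alpha (trust block): {cronbach_alpha(df):.3f}")
```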
The findings indicated that trust in ChatGPT had a significant relationship with the intention to adopt health information. The primary factors associated with trust in ChatGPT and the intention to adopt health information were social influence, situational normality, autonomy, personalisation, and information quality. The analysis revealed a negative relationship between social influence and trust in ChatGPT. Familiarity with ChatGPT was identified as a significant control variable.
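As a loose analogue of how significance is typically assessed in PLS-SEM (SmartPLS relies on bootstrapping), the sketch below bootstraps a standardized path coefficient between hypothetical trust and adoption-intention composite scores. This is an illustrative OLS-style approximation on simulated data, not the model or the estimates reported in the study.

```python
import numpy as np

def bootstrap_path(x: np.ndarray, y: np.ndarray, n_boot: int = 5000, seed: int = 1):
    """Bootstrap a standardized slope (simple path coefficient) and its 95% CI."""
    rng = np.random.default_rng(seed)
    n = len(x)
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample respondents with replacement
        xs, ys = x[idx], y[idx]
        xs = (xs - xs.mean()) / xs.std(ddof=1)  # standardize so the slope is a beta weight
        ys = (ys - ys.mean()) / ys.std(ddof=1)
        coefs[b] = np.polyfit(xs, ys, 1)[0]
    return coefs.mean(), np.percentile(coefs, [2.5, 97.5])

# Hypothetical composite scores for trust and adoption intention (means of 1-7 items).
rng = np.random.default_rng(0)
trust = rng.uniform(1, 7, size=300)
intention = 0.6 * trust + rng.normal(0, 1, size=300)
beta, ci = bootstrap_path(trust, intention)
print(f"path coefficient ~ {beta:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```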
Trust in ChatGPT is positively related to healthcare consumers' adoption of health information, with information quality as a key predictor. The findings offer empirical support and practical guidance for enhancing trust and encouraging the use of GenAI-generated health information.