Bautista John Robert, Herbert Drew, Farmer Matthew, De Torres Ryan Q, Soriano Gil P, Ronquillo Charlene E
Sinclair School of Nursing, University of Missouri-Columbia, Columbia, Missouri, United States.
Institute for Data Science and Informatics, University of Missouri-Columbia, Columbia, Missouri, United States.
Appl Clin Inform. 2025 Aug;16(4):892-902. doi: 10.1055/a-2647-1210. Epub 2025 Jul 2.
Background: Health consumers can use generative artificial intelligence (GenAI) chatbots to seek health information. As GenAI chatbots continue to improve and gain adoption, it is crucial to examine how health consumers use and perceive the health information these tools generate.

Objectives: To conduct a scoping review of health consumers' use and perceptions of health information from GenAI chatbots.

Methods: Arksey and O'Malley's five-step protocol guided the scoping review. Following PRISMA guidelines, relevant empirical papers published on or after January 1, 2019, were retrieved between February and July 2024. Thematic and content analyses were performed.

Results: We retrieved 3,840 titles and reviewed 12 papers comprising 13 studies (quantitative = 5, qualitative = 4, mixed = 4). ChatGPT was used in 11 studies, while two studies used GPT-3. Most studies were conducted in the United States (n = 4). The studies covered both general and specific (e.g., medical imaging, psychological health, and vaccination) health topics. One study explicitly used a theory. Eight studies were rated as excellent quality. Studies were categorized as user experience studies (n = 4), consumer surveys (n = 1), and evaluation studies (n = 8). Five studies examined health consumers' use of health information from GenAI chatbots. Perceptions focused on: (1) accuracy, reliability, or quality; (2) readability; (3) trust or trustworthiness; (4) privacy, confidentiality, security, or safety; (5) usefulness; (6) accessibility; (7) emotional appeal; (8) attitude; and (9) effectiveness.

Conclusion: Although health consumers can use GenAI chatbots to obtain accessible, readable, and useful health information, negative perceptions of their accuracy, trustworthiness, effectiveness, and safety are barriers that must be addressed to mitigate health-related risks, improve health beliefs, and achieve positive health outcomes. More theory-based studies are needed to better understand how exposure to health information from GenAI chatbots affects health beliefs and outcomes.