School of Communication & Information, Rutgers University, New Brunswick, NJ, United States.
JMIR Form Res. 2024 Oct 30;8:e60939. doi: 10.2196/60939.
BACKGROUND: In the digital age, large language models (LLMs) such as ChatGPT have emerged as important sources of health care information. Their interactive capabilities offer promise for widening access to health information, particularly for groups facing traditional barriers such as insurance and language constraints. Despite their growing use in public health, with millions of medical queries processed weekly, the quality of LLM-provided information remains inconsistent. Previous studies have predominantly assessed ChatGPT's English responses, overlooking the needs of non-English speakers in the United States. This study addresses that gap by evaluating the quality and linguistic parity of vaccination information from ChatGPT and the Centers for Disease Control and Prevention (CDC), with an emphasis on health equity.
OBJECTIVE: This study aims to assess the quality and language equity of vaccination information provided by ChatGPT and the CDC in English and Spanish. It highlights the need for cross-language evaluation to ensure equitable access to health information for all linguistic groups.
METHODS: We conducted a comparative analysis of ChatGPT's and the CDC's responses to frequently asked vaccination-related questions in both languages. The evaluation combined quantitative and qualitative assessments of accuracy, readability, and understandability. Accuracy was gauged by the perceived level of misinformation; readability, by the Flesch-Kincaid grade level and readability score; and understandability, by items from the Agency for Healthcare Research and Quality's Patient Education Materials Assessment Tool (PEMAT).
RESULTS: The study found that both ChatGPT and the CDC provided mostly accurate and understandable responses (eg, understandability scores above 95 out of 100). However, Flesch-Kincaid grade levels often exceeded the American Medical Association's recommended reading level, particularly in English (eg, average grade level for ChatGPT: English=12.84, Spanish=7.93, recommended=6). CDC responses outperformed ChatGPT in readability in both languages. Notably, some Spanish responses appeared to be direct translations from English, leading to unnatural phrasing. The findings underscore both the potential and the challenges of using ChatGPT for health care access.
CONCLUSIONS: ChatGPT holds potential as a health information resource but requires improvements in readability and linguistic equity to be truly effective for diverse populations. Crucially, the default user experience with ChatGPT, the one typically encountered by users without advanced language and prompting skills, can significantly shape health perceptions. This matters from a public health standpoint, because most users will interact with LLMs in their most accessible form. Ensuring that default responses are accurate, understandable, and equitable is imperative for fostering informed health decisions across diverse communities.
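The METHODS reference the Flesch-Kincaid grade level and a readability score. For readers unfamiliar with these metrics, the sketch below shows how the standard Flesch-Kincaid grade level and Flesch reading-ease score are conventionally computed; it is not the authors' tooling. The count_syllables helper is an illustrative vowel-group approximation, and the constants are calibrated for English text (Spanish readability is usually scored with adapted formulas), so this is a minimal sketch only.

```python
# Minimal sketch of the standard Flesch-Kincaid metrics (not the study's exact tooling).
import re

def count_syllables(word: str) -> int:
    # Rough approximation: count runs of vowels, with a floor of 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid(text: str) -> tuple[float, float]:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / max(1, len(sentences))     # words per sentence
    spw = syllables / max(1, len(words))          # syllables per word
    grade = 0.39 * wps + 11.8 * spw - 15.59       # Flesch-Kincaid grade level
    ease = 206.835 - 1.015 * wps - 84.6 * spw     # Flesch reading-ease score
    return grade, ease

if __name__ == "__main__":
    sample = "Vaccines help your body build protection against disease. Talk to your doctor."
    grade, ease = flesch_kincaid(sample)
    print(f"grade level: {grade:.2f}, reading ease: {ease:.1f}")
```

A grade level around 6 corresponds to the reading level commonly recommended for patient-facing materials, which is the benchmark the abstract compares against.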