Baxter Patrick, Li Meng-Hao, Wei Jiaxin, Koizumi Naoru
Schar School of Policy and Government, George Mason University, 3351 Fairfax Dr, Arlington, VA, 22201, United States, 1 (703) 993-8999.
JMIR Infodemiology. 2025 Jun 23;5:e64509. doi: 10.2196/64509.
BACKGROUND: The rapid emergence of artificial intelligence-based large language models (LLMs) in 2022 initiated extensive discussion within the academic community. While proponents highlight LLMs' potential to improve writing and analytical tasks, critics caution against the ethical and cultural implications of widespread reliance on these models. Existing literature has explored various aspects of LLMs, including their integration, performance, and utility, yet there is a gap in understanding the nature of these discussions and how public perception contrasts with expert opinion in the field of public health.

OBJECTIVE: This study explored how the general public's views of and sentiments toward LLMs, using OpenAI's ChatGPT as an example, differ from those of academic researchers and experts in the field, with the goal of gaining a more comprehensive understanding of the future role of LLMs in health care.

METHODS: We used a hybrid sentiment analysis approach that integrates the Syuzhet package in R (R Core Team) with GPT-3.5, achieving 84% accuracy in sentiment classification. In addition, structural topic modeling was applied to identify and analyze 8 key discussion topics, capturing both optimistic and critical perspectives on LLMs.

RESULTS: Findings revealed a predominantly positive sentiment toward LLM integration in health care, particularly in areas such as patient care and clinical decision-making. However, concerns were raised regarding LLMs' suitability for mental health support and patient communication, highlighting potential limitations and ethical challenges.

CONCLUSIONS: This study underscores the transformative potential of LLMs in public health while emphasizing the need to address ethical and practical concerns. By comparing public discourse with academic perspectives, our findings contribute to the ongoing scholarly debate on the opportunities and risks associated with LLM adoption in health care.
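As a rough illustration of the hybrid pipeline described under METHODS, the minimal R sketch below combines lexicon-based sentiment scoring with the syuzhet package and an 8-topic structural topic model fit with the stm package. The gadarian corpus bundled with stm stands in for the study's discussion data, the GPT-3.5 labeling step is left as a placeholder, and all parameter choices are illustrative assumptions rather than the authors' actual configuration.

```r
# Minimal sketch (illustrative only): lexicon sentiment + structural topic modeling.
# The gadarian corpus shipped with stm stands in for the study's ChatGPT discussion texts.
library(syuzhet)  # lexicon-based sentiment scoring
library(stm)      # structural topic modeling

data(gadarian)                                  # example open-ended responses bundled with stm
texts <- gadarian$open.ended.response

# 1. Lexicon-based sentiment: scores > 0 lean positive, < 0 lean negative
sentiment <- get_sentiment(texts, method = "syuzhet")

# 2. In the hybrid design, an LLM (e.g., GPT-3.5) would also label each text;
#    the abstract does not specify the prompt or API call, so it is stubbed here.
llm_label <- rep(NA_character_, length(texts))

# 3. Structural topic modeling with K = 8 topics, as reported in the abstract
processed <- textProcessor(texts, metadata = gadarian)
prepped   <- prepDocuments(processed$documents, processed$vocab, processed$meta)
fit <- stm(prepped$documents, prepped$vocab, K = 8,
           data = prepped$meta, seed = 1, verbose = FALSE)

labelTopics(fit)     # top words characterizing each of the 8 topics
summary(sentiment)   # distribution of lexicon-based sentiment scores
```

The abstract does not detail how the lexicon scores and the GPT-3.5 labels were reconciled to reach the reported 84% classification accuracy, so that adjudication step is not shown here.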