Yue Yongjie, Liu Dong, Lv Yilin, Hao Junyi, Cui Peixuan
School of Journalism and Communication, Tsinghua University, Beijing, China.
School of Journalism and Communication, Renmin University of China, Beijing, China.
J Med Internet Res. 2025 May 14;27:e70122. doi: 10.2196/70122.
Generative large language models (LLMs), such as ChatGPT, have significant potential for qualitative data analysis. This paper aims to provide early insight into how LLMs can enhance the efficiency of text coding and qualitative analysis and to evaluate their reliability. Using a dataset of semistructured interviews with blind gamers, this study offers a step-by-step tutorial on applying ChatGPT 4-Turbo to the grounded theory approach. The performance of ChatGPT 4-Turbo was evaluated by comparing its coding results with manual coding results assisted by qualitative analysis software. The results showed that ChatGPT 4-Turbo and manual coding exhibited reliability in many respects. Applying ChatGPT 4-Turbo to grounded theory improved the efficiency and diversity of coding and updated the overall grounded theory process. Compared with manual coding, ChatGPT showed shortcomings in depth, context, connections, and coding organization. Limitations and recommendations for applying artificial intelligence in qualitative research are also discussed.
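The abstract refers to a step-by-step tutorial on applying ChatGPT 4-Turbo to grounded theory coding but does not reproduce the prompts here. As a rough illustration of what LLM-assisted open coding of an interview excerpt can look like, the minimal sketch below calls GPT-4 Turbo through the OpenAI Python SDK; the prompt wording, the open_code_excerpt helper, and the sample excerpt are assumptions for illustration, not the procedure or prompts reported in the paper.

```python
# Illustrative sketch: asking GPT-4 Turbo to perform open coding on one
# interview excerpt. Prompt text and helper names are assumptions, not the
# prompts used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def open_code_excerpt(excerpt: str) -> str:
    """Return candidate open codes for a single interview excerpt."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        temperature=0.2,  # lower temperature for more stable coding output
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting with grounded theory analysis. "
                    "Perform open coding: label each distinct idea in the "
                    "excerpt with a short, descriptive code and quote the "
                    "supporting phrase."
                ),
            },
            {"role": "user", "content": excerpt},
        ],
    )
    return response.choices[0].message.content


# Hypothetical excerpt, for illustration only.
print(open_code_excerpt(
    "I mostly rely on the game's audio cues, but menus without "
    "screen-reader support keep me from even starting some games."
))
```

In practice, the returned codes would still need to be compared against manually derived codes (for example, from qualitative analysis software) before being carried forward into axial and selective coding, which is the comparison the study reports.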