Shanwetter Levit Neta, Saban Mor
School of Health Professions, Gray Faculty of Medical and Health Sciences, Tel Aviv University, Tel Aviv, Israel.
NPJ Digit Med. 2025 Jun 5;8(1):336. doi: 10.1038/s41746-025-01747-3.
Large language models (LLMs) are transforming the landscape of healthcare research, yet their role in qualitative analysis remains underexplored. This study compares human-led and LLM-assisted approaches to analyzing cancer patient narratives, drawing on 33 semi-structured interviews. We conducted three parallel analyses: an investigator-led thematic analysis and two LLM-assisted analyses using ChatGPT-4o and Gemini Advanced 1.5 Pro. The investigator-led approach identified psychosocial and emotional themes, while the LLMs highlighted structural, temporal, and logistical aspects. The LLMs identified recurring patterns efficiently but struggled with emotional nuance and contextual depth. Investigator-led analysis, while time-intensive, captured the complexities of identity disruption and emotional processing. Our findings suggest that LLMs can serve as complementary tools in qualitative research, enhancing analytical breadth when paired with human interpretation. This study proposes a hybrid model integrating AI-assisted and human-led methods and offers practical recommendations for responsibly incorporating LLMs into qualitative health research.