Department of Psychology, New York University, New York, NY 10003.
Department of Psychology, Princeton University, Princeton, NJ 08540.
Proc Natl Acad Sci U S A. 2024 Aug 20;121(34):e2308950121. doi: 10.1073/pnas.2308950121. Epub 2024 Aug 12.
The social and behavioral sciences have been increasingly using automated text analysis to measure psychological constructs in text. We explore whether GPT, the large-language model (LLM) underlying the AI chatbot ChatGPT, can be used as a tool for automated psychological text analysis in several languages. Across 15 datasets (n = 47,925 manually annotated tweets and news headlines), we tested whether different versions of GPT (3.5 Turbo, 4, and 4 Turbo) can accurately detect psychological constructs (sentiment, discrete emotions, offensiveness, and moral foundations) across 12 languages. We found that GPT (r = 0.59 to 0.77) performed much better than English-language dictionary analysis (r = 0.20 to 0.30) at detecting psychological constructs as judged by manual annotators. GPT performed nearly as well as, and sometimes better than, several top-performing fine-tuned machine learning models. Moreover, GPT's performance improved across successive versions of the model, particularly for lesser-spoken languages, and became less expensive. Overall, GPT may be superior to many existing methods of automated text analysis, since it achieves relatively high accuracy across many languages, requires no training data, and is easy to use with simple prompts (e.g., "is this text negative?") and little coding experience. We provide sample code and a video tutorial for analyzing text with the GPT application programming interface. We argue that GPT and other LLMs help democratize automated text analysis by making advanced natural language processing capabilities more accessible, and may help facilitate more cross-linguistic research with understudied languages.
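To illustrate the kind of workflow the abstract describes, the sketch below shows one way to pose a simple yes/no sentiment prompt to GPT through the OpenAI chat completions API in Python. This is not the authors' released sample code; the function name, model choice, and prompt wording are illustrative assumptions, and in practice each text in a dataset would be labeled this way and the labels compared against manual annotations.

```python
# Minimal illustrative sketch (not the authors' released code): asking GPT whether
# a short text is negative, using a simple prompt of the kind quoted in the
# abstract ("is this text negative?").
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def is_negative(text: str, model: str = "gpt-4-turbo") -> str:
    """Return GPT's one-word answer ("yes"/"no") to whether the text is negative."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": (
                    'Is this text negative? Answer "yes" or "no" only.\n\n'
                    f'Text: "{text}"'
                ),
            }
        ],
        temperature=0,  # keep outputs as stable as possible for annotation tasks
    )
    return response.choices[0].message.content.strip().lower()


# Example usage on a single headline; a real analysis would loop over thousands
# of tweets or headlines and compare the resulting labels with human annotations.
print(is_negative("The flood destroyed hundreds of homes overnight."))
```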