Ross Deans Kristensen-McLachlan, Miceal Canavan, Marton Kárdos, Mia Jacobsen, Lene Aarøe
Department of Linguistics, Cognitive Science, and Semiotics, Aarhus University, Aarhus 8000, Denmark.
Center for Humanities Computing, Aarhus University, Aarhus 8000, Denmark.
PNAS Nexus. 2025 Apr 1;4(4):pgaf069. doi: 10.1093/pnasnexus/pgaf069. eCollection 2025 Apr.
Recent research highlights the significant potential of ChatGPT for text annotation in social science research. However, ChatGPT is a closed-source product, which has major drawbacks with regard to transparency, reproducibility, cost, and data protection. Recent advances in open-source (OS) large language models (LLMs) offer an alternative without these drawbacks. It is therefore important to evaluate the performance of OS LLMs relative to both ChatGPT and standard supervised machine learning classification. We conduct a systematic comparative evaluation of a range of OS LLMs alongside ChatGPT, using both zero- and few-shot learning as well as generic and custom prompts, and compare the results with supervised classification models. Using a new dataset of tweets from US news media and focusing on simple binary text annotation tasks, we find significant variation in the performance of ChatGPT and OS models across tasks, and that a supervised classifier using DistilBERT generally outperforms both. Given the unreliable performance of ChatGPT and the significant challenges it poses to Open Science, we advise caution when using ChatGPT for substantive text annotation tasks.