Bojić Ljubiša, Zagovora Olga, Zelenkauskaite Asta, Vuković Vuk, Čabarkapa Milan, Veseljević Jerković Selma, Jovančević Ana
Institute for Artificial Intelligence Research and Development of Serbia, Fruskogorska, Novi Sad, Serbia.
Institute for Philosophy and Social Theory, Digital Society Lab, University of Belgrade, Kraljice Natalije 45, Belgrade, 11000, Serbia.
Sci Rep. 2025 Apr 3;15(1):11477. doi: 10.1038/s41598-025-96508-3.
In the era of rapid digital communication, vast amounts of textual data are generated daily, demanding efficient methods of latent content analysis to extract meaningful insights. Large Language Models (LLMs) offer potential for automating this process, yet comprehensive assessments comparing their performance to that of human annotators across multiple dimensions are lacking. This study evaluates the inter-rater reliability, consistency, and quality of seven state-of-the-art LLMs, including variants of OpenAI's GPT-4, Gemini, Llama-3.1-70B, and Mixtral 8×7B. Their performance was compared to that of human annotators in analyzing sentiment, political leaning, emotional intensity, and sarcasm detection. The study involved 33 human annotators and eight LLM variants assessing 100 curated textual items, yielding 3,300 human and 19,200 LLM annotations. LLM performance was also evaluated at three time points to measure temporal consistency. The results reveal that both humans and most LLMs exhibit high inter-rater reliability in sentiment analysis and political leaning assessments, with LLMs demonstrating higher reliability than humans. For emotional intensity, LLMs likewise displayed higher reliability than humans, although humans rated emotional intensity significantly higher. Both groups struggled with sarcasm detection, as evidenced by low reliability. Most LLMs showed excellent temporal consistency across all dimensions, indicating stable performance over time. The study concludes that LLMs, especially GPT-4, can effectively replicate human analysis in sentiment and political leaning, although human expertise remains essential for interpreting emotional intensity. The findings demonstrate the potential of LLMs for consistent, high-quality performance in certain areas of latent content analysis.
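The abstract does not name the reliability statistic used; a common choice for multi-rater annotation tasks on ordinal scales like these is Krippendorff's alpha. The sketch below, using the third-party `krippendorff` package and made-up ratings (not study data), shows how inter-rater reliability could be computed separately for the human and LLM groups.

```python
# Minimal sketch of an inter-rater reliability computation, assuming
# Krippendorff's alpha as the statistic (the abstract does not specify one).
# Requires: pip install krippendorff numpy
# All ratings below are illustrative placeholder values, not study data.
import numpy as np
import krippendorff

# Rows = raters (human annotators or LLM variants), columns = annotated items.
# np.nan marks items a rater did not score.
human_ratings = np.array([
    [1, 2, 3, 3, 2, np.nan],
    [1, 2, 3, 2, 2, 4],
    [2, 2, 3, 3, 1, 4],
])
llm_ratings = np.array([
    [1, 2, 3, 3, 2, 4],
    [1, 2, 3, 3, 2, 4],
    [1, 3, 3, 3, 2, 4],
])

for label, data in [("humans", human_ratings), ("LLMs", llm_ratings)]:
    # Ordinal level of measurement fits Likert-style sentiment,
    # political-leaning, and emotional-intensity scales.
    alpha = krippendorff.alpha(reliability_data=data,
                               level_of_measurement="ordinal")
    print(f"Krippendorff's alpha ({label}): {alpha:.3f}")
```

Comparing the two alphas mirrors the group-level contrast reported in the abstract; with real data, each of the four dimensions would get its own reliability run.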
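The abstract likewise reports excellent temporal consistency across three time points without specifying a measure. One illustrative way to quantify it is to re-score the same items at each time point and correlate the runs pairwise; the sketch below uses simulated scores and Spearman correlation purely as an assumed stand-in for the paper's actual procedure.

```python
# Hedged sketch of a temporal-consistency check: the same 100 items are
# scored at three time points and the runs are compared pairwise.
# Spearman correlation is an assumption for illustration; the data are
# simulated, not the study's.
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical 1-5 scores from one LLM at time point 1, then two later
# runs with small simulated drift around the first run's scores.
run1 = rng.integers(1, 6, size=100)
run2 = np.clip(run1 + rng.integers(-1, 2, size=100), 1, 5)
run3 = np.clip(run1 + rng.integers(-1, 2, size=100), 1, 5)

for (i, a), (j, b) in combinations(enumerate([run1, run2, run3], start=1), 2):
    rho, _ = spearmanr(a, b)
    print(f"time point {i} vs {j}: Spearman rho = {rho:.3f}")
```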