Alizadeh Meysam, Kubli Maël, Samei Zeynab, Dehghani Shirin, Zahedivafa Mohammadmasiha, Bermeo Juan D, Korobeynikova Maria, Gilardi Fabrizio
Department of Political Science, University of Zurich, 8050 Zurich, Switzerland.
Department of Computer Science, Institute for Fundamental Research, Tehran, Iran.
J Comput Soc Sci. 2025;8(1):17. doi: 10.1007/s42001-024-00345-9. Epub 2024 Dec 18.
This paper studies the performance of open-source Large Language Models (LLMs) in text classification tasks typical of political science research. By examining tasks such as stance, topic, and relevance classification, we aim to guide scholars in making informed decisions about their use of LLMs for text analysis and to establish a baseline performance benchmark that demonstrates the models' effectiveness. Specifically, we assess both zero-shot and fine-tuned LLMs across a range of text annotation tasks using datasets of news articles and tweets. Our analysis shows that fine-tuning improves the performance of open-source LLMs, allowing them to match or even surpass zero-shot GPT-3.5 and GPT-4, though they still lag behind fine-tuned GPT-3.5. We further establish that fine-tuning is preferable to few-shot training when only a relatively modest quantity of annotated text is available. Our findings show that fine-tuned open-source LLMs can be effectively deployed in a broad spectrum of text annotation applications. We provide a Python notebook facilitating the application of LLMs in text annotation for other researchers.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s42001-024-00345-9.
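To make the zero-shot annotation workflow described in the abstract concrete, below is a minimal, hypothetical Python sketch of prompting an open-source LLM for stance classification via the Hugging Face transformers library. It is not the authors' released notebook; the model name, prompt wording, and label set are illustrative assumptions.

# Minimal sketch (assumptions: model choice, prompt, labels) of zero-shot
# stance annotation with an open-source LLM, in the spirit of the paper.
from transformers import pipeline

MODEL_NAME = "HuggingFaceH4/zephyr-7b-beta"  # assumed open-source instruct model

generator = pipeline("text-generation", model=MODEL_NAME, device_map="auto")

def annotate(text: str, labels: list[str]) -> str:
    # Zero-shot prompt: ask the model to pick exactly one label.
    prompt = (
        "Classify the stance of the following tweet toward content moderation.\n"
        f"Answer with exactly one of: {', '.join(labels)}.\n\n"
        f"Tweet: {text}\nLabel:"
    )
    out = generator(prompt, max_new_tokens=5, do_sample=False)[0]["generated_text"]
    completion = out[len(prompt):].strip()
    # Return the first recognized label, falling back to the raw completion.
    for label in labels:
        if label.lower() in completion.lower():
            return label
    return completion

print(annotate("Platforms should remove harmful posts faster.",
               ["favorable", "unfavorable", "neutral"]))

Fine-tuning the same open-source model on a modest set of annotated examples, as the paper recommends, would replace this prompting step with supervised training on labeled text-label pairs.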