
Can large language models help predict results from a complex behavioural science study?

Author Information

Steffen Lippert, Anna Dreber, Magnus Johannesson, Warren Tierney, Wilson Cyrus-Lai, Eric Luis Uhlmann, Thomas Pfeiffer

Affiliations

Department of Economics, University of Auckland, Auckland, New Zealand.

Department of Economics, Stockholm School of Economics, Stockholm, Sweden.

Publication Information

R Soc Open Sci. 2024 Sep 25;11(9):240682. doi: 10.1098/rsos.240682. eCollection 2024 Sep.

Abstract

We tested whether large language models (LLMs) can help predict results from a complex behavioural science experiment. In study 1, we investigated the performance of the widely used LLMs GPT-3.5 and GPT-4 in forecasting the empirical findings of a large-scale experimental study of emotions, gender, and social perceptions. We found that GPT-4, but not GPT-3.5, matched the performance of a cohort of 119 human experts, with correlations of 0.89 (GPT-4), 0.07 (GPT-3.5) and 0.87 (human experts) between aggregated forecasts and realized effect sizes. In study 2, giving participants from a university subject pool the opportunity to query a GPT-4 powered chatbot significantly increased the accuracy of their forecasts. The results indicate promise for artificial intelligence (AI) to help anticipate, at scale and at minimal cost, which claims about human behaviour will find empirical support and which will not. Our discussion focuses on avenues for human-AI collaboration in science.
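The reported figures of 0.89, 0.07 and 0.87 refer to correlations between forecasts aggregated across forecasters and the effect sizes realized in the experiment. The sketch below is a minimal illustration of that kind of aggregation-and-correlation analysis, not the authors' code; the array layout and all numerical values are made-up assumptions for demonstration only.

```python
# Minimal sketch (illustrative only, not the study's analysis code):
# aggregate forecasts across forecasters and correlate with realized effects.
import numpy as np
from scipy.stats import pearsonr

# One row per forecaster (e.g. repeated GPT-4 runs or human experts),
# one column per experimental effect being predicted. Placeholder values.
forecasts = np.array([
    [0.30, 0.10, 0.55, 0.05],
    [0.25, 0.15, 0.60, 0.00],
    [0.35, 0.05, 0.50, 0.10],
])

# Realized effect sizes from the experiment (placeholder values).
realized = np.array([0.28, 0.02, 0.58, 0.07])

# Aggregate forecasts with a simple mean, then compute Pearson's r.
aggregated = forecasts.mean(axis=0)
r, p = pearsonr(aggregated, realized)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```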

