Dvorak Fabian, Stumpf Regina, Fehrler Sebastian, Fischbacher Urs
Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz 78464, Germany.
Department of Environmental Social Sciences, Eawag, Dübendorf 8600, Switzerland.
PNAS Nexus. 2025 Apr 7;4(4):pgaf112. doi: 10.1093/pnasnexus/pgaf112. eCollection 2025 Apr.
Large language models (LLMs) are poised to reshape the way individuals communicate and interact. While this form of AI has the potential to efficiently make many human decisions, there is limited understanding of how individuals will respond to its use in social interactions. In particular, it remains unclear how individuals interact with LLMs when the interaction has consequences for other people. Here, we report the results of a large-scale, preregistered online experiment showing that human players' fairness, trust, trustworthiness, cooperation, and coordination in economic two-player games decrease when the decision of the interaction partner is taken over by ChatGPT. In contrast, we observe no adverse reactions when individuals are uncertain whether they are interacting with a human or an LLM. At the same time, participants often delegate decisions to the LLM, especially when the model's involvement is not disclosed, and individuals have difficulty distinguishing between decisions made by humans and those made by AI.