
Risk and prosocial behavioural cues elicit human-like response patterns from AI chatbots.

Affiliations

Positive Psychology Research Center, School of Social Sciences, Tsinghua University, Beijing, China.

Department of Psychology, University of Pennsylvania, Philadelphia, USA.

Publication information

Sci Rep. 2024 Mar 26;14(1):7095. doi: 10.1038/s41598-024-55949-y.

Abstract

Emotions, long deemed a distinctly human characteristic, guide a repertoire of behaviors, e.g., promoting risk-aversion under negative emotional states or generosity under positive ones. The question of whether Artificial Intelligence (AI) can possess emotions remains elusive, chiefly due to the absence of an operationalized consensus on what constitutes 'emotion' within AI. Adopting a pragmatic approach, this study investigated the response patterns of AI chatbots, specifically large language models (LLMs), to various emotional primes. We engaged AI chatbots as one would human participants, presenting scenarios designed to elicit positive, negative, or neutral emotional states. Multiple accounts of OpenAI's ChatGPT Plus were then tasked with responding to inquiries concerning investment decisions and prosocial behaviors. Our analysis revealed that ChatGPT-4 bots, when primed with positive, negative, or neutral emotions, exhibited distinct response patterns in both risk-taking and prosocial decisions, a phenomenon less evident in the ChatGPT-3.5 iterations. This observation suggests an enhanced capacity for modulating responses based on emotional cues in more advanced LLMs. While these findings do not suggest the presence of emotions in AI, they underline the feasibility of swaying AI responses by leveraging emotional indicators.
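The prime-then-probe procedure described in the abstract can be approximated with a short script. The sketch below is only a minimal illustration of that kind of setup, not the authors' actual protocol: the model name "gpt-4", the prime texts, and the investment question are assumed placeholders, and the study itself used multiple ChatGPT Plus accounts and a fuller battery of scenarios rather than a single API call per condition.

# Minimal sketch of an emotional-priming probe for an LLM (illustrative only;
# the prime texts and the investment question are hypothetical, not the study's materials).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PRIMES = {
    "positive": "Recall a moment when everything went better than you had hoped. Describe how it felt.",
    "negative": "Recall a moment when you lost something important to you. Describe how it felt.",
    "neutral":  "Describe the steps involved in boiling a pot of water.",
}

PROBE = ("You have 1,000 units of money. What fraction would you invest in a risky asset "
         "that doubles or disappears with equal probability? Answer with a number between 0 and 1.")

def run_condition(condition: str, model: str = "gpt-4") -> str:
    """Prime the chatbot with an emotional scenario, then ask the risk question in the same chat."""
    messages = [{"role": "user", "content": PRIMES[condition]}]
    prime_reply = client.chat.completions.create(model=model, messages=messages)
    # Keep the model's reply in the conversation so the probe is answered in the primed context.
    messages.append({"role": "assistant", "content": prime_reply.choices[0].message.content})
    messages.append({"role": "user", "content": PROBE})
    probe_reply = client.chat.completions.create(model=model, messages=messages)
    return probe_reply.choices[0].message.content

if __name__ == "__main__":
    for condition in PRIMES:
        print(condition, "->", run_condition(condition))

Comparing the numeric answers across the three conditions (and across models) is the kind of contrast the study reports: under this reading, a lower invested fraction after the negative prime would correspond to the risk-aversion pattern described above.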


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c90/10963757/7df42ed05536/41598_2024_55949_Fig1_HTML.jpg
