Department of Computer Science, University of York, York, North Yorkshire, United Kingdom.
School of Arts and Creative Technologies, University of York, York, North Yorkshire, United Kingdom.
PLoS One. 2024 May 10;19(5):e0301033. doi: 10.1371/journal.pone.0301033. eCollection 2024.
The development of believable, natural, and interactive digital artificial agents is a field of growing interest. Theoretical uncertainties and technical barriers present considerable challenges to the field, particularly with regard to developing agents that effectively simulate human emotions. Large language models (LLMs) might address these issues by tapping into common patterns in situational appraisal. In three empirical experiments, this study tests the capabilities of LLMs to solve emotional intelligence tasks and to simulate emotions. It presents and evaluates a new Chain-of-Emotion architecture for emotion simulation within video games, based on psychological appraisal research. Results show that it outperforms control LLM architectures on a range of user experience and content analysis metrics. This study therefore provides early evidence of how to construct and test affective agents based on cognitive processes represented in language models.