Hong Chuanyang, He Qingyun
School of Computing and Artificial Intelligence, Southwestern University of Finance and Economics, Chengdu, China.
School of Finance and Economics, Anhui Science and Technology University, Bengbu, China.
Front Psychol. 2025 May 7;16:1591618. doi: 10.3389/fpsyg.2025.1591618. eCollection 2025.
The surge in the capabilities of large language models (LLMs) has propelled the development of Artificial General Intelligence (AGI), highlighting generative agents as pivotal components for emulating complex AI behaviors. Because training a separate LLM for every AI agent is prohibitively costly, advanced memory retrieval mechanisms are needed to preserve each agent's distinct characteristics and memories.
In this research, we developed a text-based simulation of a generative agent world, constructing a community of multiple agents and locations that supports a defined range of interactions. Within this framework, we introduced a novel memory retrieval system built on an Auxiliary Cross Attention Network (ACAN), which computes attention weights between an agent's current state and its stored memories, ranks the memories by those weights, and selects the most relevant ones for the situation at hand. During training, we additionally enlisted an LLM as an auxiliary judge: it compares the memories retrieved by our model with those retrieved by a baseline method, and a loss function constructed from these comparisons drives the optimization. To our knowledge, this is the first study to use LLMs to train a dedicated agent memory retrieval network.
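To make the retrieval mechanism concrete, the following is a minimal PyTorch sketch of a cross-attention memory scorer in the spirit of ACAN. The class name, embedding dimension, and top-k selection below are illustrative assumptions rather than the paper's exact architecture: the agent's current state acts as the attention query, stored memory embeddings act as keys, and the normalized attention weights double as relevance scores for ranking.

```python
# Hypothetical sketch of an ACAN-style memory scorer; names and
# dimensions are assumptions, not the authors' published architecture.
import torch
import torch.nn as nn


class AuxiliaryCrossAttention(nn.Module):
    """Scores stored memories against the agent's current state.

    The current-state embedding is projected to a query; memory
    embeddings are projected to keys. Softmax-normalized attention
    weights serve directly as relevance scores.
    """

    def __init__(self, d_model: int = 256):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, state: torch.Tensor, memories: torch.Tensor) -> torch.Tensor:
        # state: (d_model,) embedding of the current situation.
        # memories: (n_memories, d_model) embeddings of stored memories.
        q = self.q_proj(state)                # (d_model,)
        k = self.k_proj(memories)             # (n_memories, d_model)
        scores = (k @ q) * self.scale         # one scaled score per memory
        return torch.softmax(scores, dim=-1)  # attention weights as relevance


def retrieve_top_k(scorer: AuxiliaryCrossAttention,
                   state: torch.Tensor,
                   memories: torch.Tensor,
                   k: int = 5) -> torch.Tensor:
    """Return indices of the k memories most relevant to the current state."""
    weights = scorer(state, memories)
    return torch.topk(weights, min(k, memories.shape[0])).indices


# Usage with random embeddings, standing in for encoded agent memories:
scorer = AuxiliaryCrossAttention(d_model=256)
state = torch.randn(256)
memories = torch.randn(100, 256)
top_idx = retrieve_top_k(scorer, state, memories, k=5)
```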
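The comparison-based loss could then take a preference-learning form. The sketch below is only one plausible construction under stated assumptions; the judge's verdict (`llm_prefers_acan`), the margin value, and the function name are hypothetical stand-ins for whatever prompt-based comparison the authors actually perform.

```python
# Hedged sketch of an LLM-guided comparison loss; the margin form and
# the boolean LLM verdict are assumptions for illustration only.
import torch
import torch.nn.functional as F


def comparison_loss(acan_weights: torch.Tensor,
                    acan_idx: torch.Tensor,
                    base_idx: torch.Tensor,
                    llm_prefers_acan: bool,
                    margin: float = 0.1) -> torch.Tensor:
    """Margin loss over the attention mass assigned to two retrieval sets.

    acan_weights: softmax attention weights over all memories, (n_memories,)
    acan_idx:     indices of memories retrieved by the ACAN
    base_idx:     indices of memories retrieved by the baseline method
    llm_prefers_acan: the LLM judge's verdict on which set fits the situation
    """
    acan_mass = acan_weights[acan_idx].sum()
    base_mass = acan_weights[base_idx].sum()
    if llm_prefers_acan:
        # Reinforce the network's own choice: its set should dominate by a margin.
        return F.relu(margin - (acan_mass - base_mass))
    # Otherwise shift attention mass toward the baseline's memories.
    return F.relu(margin - (base_mass - acan_mass))
```

A margin formulation like this rewards the network when the LLM prefers its retrieval and redirects attention mass toward the baseline's memories when it does not, which matches the abstract's description of a loss built from LLM comparisons without committing to its exact form.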
Our empirical evaluations demonstrate that this approach substantially enhances the quality of memory retrieval, thereby increasing the adaptability and behavioral consistency of agents in fluctuating environments.
Our findings not only introduce new perspectives and methodologies for memory retrieval in generative agents but also extend the utility of LLMs in memory management across varied AI agent applications.