
Needs, Pains, and Motivations in Autonomous Agents.

Affiliations

Russ College of Electrical Engineering and Computer Science, Ohio University, Athens, OH, USA.

School of Computer Science and Management, University of Information Technology and Management, Rzeszów, Poland.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2017 Nov;28(11):2528-2540. doi: 10.1109/TNNLS.2016.2596787.

Abstract

This paper presents the development of a motivated learning (ML) agent with symbolic I/O. Our earlier work on the ML agent was enhanced, giving it autonomy for interaction with other agents. Specifically, we equipped the agent with drives and pains that establish its motivations to learn how to respond to desired and undesired events and create related abstract goals. The purpose of this paper is to explore the autonomous development of motivations and memory in agents within a simulated environment. The ML agent has been implemented in a virtual environment created within the NeoAxis game engine. Additionally, to illustrate the benefits of an ML-based agent, we compared the performance of our algorithm against various reinforcement learning (RL) algorithms in a dynamic test scenario, and demonstrated that our ML agent learns better than any of the tested RL agents.
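The core idea described above, that pains grow when needs go unmet and the agent learns which actions reduce its currently dominant pain, can be illustrated with a toy sketch. This is not the paper's implementation; the pain-growth rate, action effects, and reinforcement rule here are illustrative assumptions only.

```python
class MotivatedAgent:
    """Toy sketch of a pain-driven agent (illustrative, not the paper's model).

    Each pain grows over time as the corresponding need goes unmet. On each
    step the agent addresses its dominant pain with the action it currently
    believes most useful, and reinforces that belief in proportion to the
    pain reduction actually observed.
    """

    def __init__(self, actions):
        # 'actions' maps an action name to its assumed effect on each pain,
        # e.g. {"eat": {"hunger": -0.5}, "wander": {}}
        self.actions = actions
        self.pains = {"hunger": 0.0}  # a single primitive pain, for brevity
        # learned usefulness of each action for reducing each pain
        self.weights = {p: {a: 0.0 for a in actions} for p in self.pains}

    def step(self):
        # primitive pains grow while their needs are unmet (assumed rate)
        for p in self.pains:
            self.pains[p] += 0.1
        dominant = max(self.pains, key=self.pains.get)
        # choose the action with the highest learned weight for that pain
        best = max(self.actions, key=lambda a: self.weights[dominant][a])
        before = self.pains[dominant]
        effect = self.actions[best].get(dominant, 0.0)
        self.pains[dominant] = max(0.0, before + effect)
        # reinforce the chosen action by the pain reduction it produced
        self.weights[dominant][best] += before - self.pains[dominant]
        return dominant, best
```

Running the agent for a few steps with an "eat" action shows the weight for the pain-reducing action growing while the pain itself stays bounded; the full ML framework additionally creates abstract pains and goals on top of such primitive ones, which this sketch omits.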

