AI Lab, SoftBank Robotics Europe; Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique, ISIR, Paris, France
Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique, ISIR, Paris, France
Evol Comput. 2024 Sep 3;32(3):275-305. doi: 10.1162/evco_a_00343.
Learning optimal policies in sparse-reward settings is difficult, as the learning agent has little to no feedback on the quality of its actions. In these situations, a good strategy is to focus on exploration, in the hope of discovering a reward signal to improve on. A learning algorithm capable of dealing with this kind of setting has to be able to (1) explore possible agent behaviors and (2) exploit any reward it discovers. Exploration algorithms have been proposed that require the definition of a low-dimensional behavior space, in which the behavior generated by the agent's policy can be represented. The need to design this space a priori so that it is worth exploring is a major limitation of these algorithms. In this work, we introduce STAX, an algorithm designed to learn a behavior space on the fly and to explore it while optimizing any reward discovered (see Figure 1). It does so by separating the exploration and learning of the behavior space from the exploitation of the reward through an alternating two-step process. In the first step, STAX builds a repertoire of diverse policies while learning a low-dimensional representation of the high-dimensional observations generated during policy evaluation. In the exploitation step, emitters optimize the performance of the discovered rewarding solutions. Experiments conducted on three different sparse-reward environments show that STAX performs comparably to existing baselines while requiring much less prior information about the task, as it autonomously builds the behavior space it explores.
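To make the alternating two-step process described above more concrete, here is a minimal, hedged sketch of that loop structure in Python. Everything in it (the toy environment, the PCA stand-in for the learned behavior-space encoder, the hill-climbing stand-in for emitters, and all function names) is an illustrative assumption, not the authors' implementation; STAX learns its descriptors with an autoencoder and exploits rewards with proper emitters, both of which are simplified away here.

```python
# Illustrative sketch of an alternating explore/exploit loop in the spirit of
# the abstract. All components are hypothetical placeholders, not STAX itself.
import numpy as np

rng = np.random.default_rng(0)
POLICY_DIM, OBS_DIM, DESC_DIM = 8, 32, 2
W_ENV = rng.standard_normal((POLICY_DIM, OBS_DIM))   # fixed toy "environment"

def evaluate(policy):
    """Toy rollout: a high-dimensional observation plus a sparse reward."""
    obs = np.tanh(policy @ W_ENV)
    reward = max(0.0, obs[:4].sum() - 1.0)            # zero almost everywhere
    return obs, reward

def fit_encoder(all_obs):
    """Stand-in for the learned behavior space: top principal components."""
    X = np.asarray(all_obs) - np.mean(all_obs, axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:DESC_DIM]                              # (DESC_DIM, OBS_DIM) projection

def describe(obs, encoder):
    return encoder @ obs

def novelty(desc, archive_descs, k=5):
    """Mean distance to the k nearest descriptors already in the repertoire."""
    if not archive_descs:
        return np.inf
    dists = np.linalg.norm(np.asarray(archive_descs) - desc, axis=1)
    return np.sort(dists)[:k].mean()

repertoire, observations = [], []                     # (policy, descriptor, reward)
encoder = rng.standard_normal((DESC_DIM, OBS_DIM))    # random init before any data

for iteration in range(20):
    # --- Exploration step: grow a repertoire of diverse policies ----------
    parents = ([rng.standard_normal(POLICY_DIM) for _ in range(8)] if not repertoire
               else [repertoire[i][0] for i in rng.integers(len(repertoire), size=8)])
    for parent in parents:
        child = parent + 0.2 * rng.standard_normal(POLICY_DIM)     # mutation
        obs, rew = evaluate(child)
        observations.append(obs)
        desc = describe(obs, encoder)
        if novelty(desc, [d for _, d, _ in repertoire]) > 0.05:
            repertoire.append((child, desc, rew))

    # ...then refresh the learned low-dimensional behavior space on all
    # observations gathered so far (an autoencoder in the paper, PCA here)
    # and re-describe the repertoire in the updated space.
    encoder = fit_encoder(observations)
    repertoire = [(p, describe(evaluate(p)[0], encoder), r) for p, _, r in repertoire]

    # --- Exploitation step: "emitters" improve any rewarding solutions ----
    for idx, (policy, desc, rew) in enumerate(repertoire):
        if rew > 0.0:                                  # a reward was discovered
            for _ in range(10):                        # crude hill-climbing emitter
                cand = policy + 0.05 * rng.standard_normal(POLICY_DIM)
                obs, cand_rew = evaluate(cand)
                if cand_rew > rew:
                    policy, rew = cand, cand_rew
                    desc = describe(obs, encoder)
            repertoire[idx] = (policy, desc, rew)

best = max((r for _, _, r in repertoire), default=0.0)
print(f"repertoire size: {len(repertoire)}, best reward: {best:.3f}")
```

The point of the sketch is the separation of concerns: exploration and representation learning never look at the reward, while the exploitation step only refines solutions that already obtained one, mirroring the two-step structure summarized in the abstract.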