Suau Miguel, He Jinke, Congeduti Elena, Starre Rolf A N, Czechowski Aleksander, Oliehoek Frans A
Intelligent Systems, Delft University of Technology, Delft, The Netherlands.
Neural Comput Appl. 2025;37(19):13145-13161. doi: 10.1007/s00521-022-07691-7. Epub 2022 Sep 4.
Due to its perceptual limitations, an agent may have too little information about the environment to act optimally. In such cases, it is important to keep track of the action-observation history to uncover hidden state information. Recent deep reinforcement learning methods use recurrent neural networks (RNNs) to memorize past observations. However, these models are expensive to train and have convergence difficulties, especially when dealing with high-dimensional data. In this paper, we propose influence-aware memory (IAM), a theoretically inspired memory architecture that alleviates the training difficulties by restricting the input of the recurrent layers to those variables that influence the hidden state information. Moreover, as opposed to standard RNNs, in which every piece of information used for estimating values is inevitably fed back into the network for the next prediction, our model allows information to flow without necessarily being stored in the RNN's internal memory. Results indicate that, by letting the recurrent layers focus on a small fraction of the observation variables while processing the rest of the information with a feedforward neural network, we can outperform standard recurrent architectures in both training speed and policy performance. This approach also reduces runtime and obtains better scores than methods that stack multiple observations to remove partial observability.
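The abstract describes the architecture only in prose; the following is a minimal PyTorch sketch of the idea as stated, in which a restricted subset of observation variables feeds the recurrent layer while the rest takes a memoryless feedforward path. The class and parameter names (InfluenceAwareMemory, d_set_idx, the hidden sizes, the scalar value head) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class InfluenceAwareMemory(nn.Module):
    """Sketch: only the observation variables assumed to influence the
    hidden state information (d_set_idx) enter the recurrent layer; the
    remaining variables take a memoryless feedforward path, so they are
    used for the current prediction without being written to memory."""

    def __init__(self, obs_dim, d_set_idx, rnn_hidden=64, fnn_hidden=128):
        super().__init__()
        self.d_set_idx = list(d_set_idx)
        self.rest_idx = [i for i in range(obs_dim) if i not in self.d_set_idx]
        # Recurrent branch: sees only the restricted input set.
        self.rnn = nn.GRU(len(self.d_set_idx), rnn_hidden, batch_first=True)
        # Feedforward branch: processes everything else, with no recurrence.
        self.fnn = nn.Sequential(nn.Linear(len(self.rest_idx), fnn_hidden),
                                 nn.ReLU())
        self.head = nn.Linear(rnn_hidden + fnn_hidden, 1)  # e.g. a value head

    def forward(self, obs, h=None):
        # obs: (batch, time, obs_dim); h: optional GRU hidden state
        mem_out, h = self.rnn(obs[..., self.d_set_idx], h)
        ff_out = self.fnn(obs[..., self.rest_idx])
        return self.head(torch.cat([mem_out, ff_out], dim=-1)), h
```

In this sketch, d_set_idx would come from domain knowledge or an analysis of which observation variables carry hidden-state information; the abstract does not specify how that subset is selected.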