Building 176 Boldrewood Innovation Campus, University of Southampton, Burgess Road, Southampton SO16 7QF, United Kingdom.
Neural Netw. 2019 Jul;115:30-49. doi: 10.1016/j.neunet.2019.03.006. Epub 2019 Mar 25.
Increasingly, autonomous agents will be required to operate on long-term missions. This will create a demand for general intelligence, because feedback from a human operator may be sparse and delayed, and because not all behaviours can be prescribed in advance. Deep neural networks and reinforcement learning methods can be applied in such environments, but their fixed updating routines imply an inductive bias in learning spatio-temporal patterns, meaning some environments will be unsolvable. To address this problem, this paper proposes active adaptive perception: the ability of an architecture to learn when and how to modify and selectively utilise its perception module. To achieve this, a generic architecture based on a self-modifying policy (SMP) is proposed and implemented using Incremental Self-improvement with the Success Story Algorithm. The architecture contrasts with deep reinforcement learning systems, which follow fixed training strategies, and with earlier SMP studies, which for perception relied either entirely on working memory or on untrainable active perception instructions. One computationally cheap and one more expensive implementation are presented and compared, on various non-episodic partially observable mazes, to DRQN, an off-policy deep reinforcement learner using experience replay, and to Incremental Self-improvement, an SMP. The results show that the simple instruction set leads to emergent strategies for avoiding detracting corridors and rooms, and that the expensive implementation allows perception to be selectively ignored where it is inaccurate.
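The core mechanism named in the abstract — a self-modifying policy whose changes are kept or undone by the Success Story Algorithm (SSA) — can be illustrated with a deliberately simplified sketch. In this hypothetical toy, the only self-modifiable parameter is `p_sense`, the probability of consulting the perception module on a given step (the paper's SMPs instead modify a full instruction-probability policy), and the reward scheme, step counts, and perturbation size are all illustrative assumptions, not the paper's setup:

```python
import random

random.seed(0)

class SMPAgent:
    """Toy self-modifying policy (SMP) with an SSA-style checkpoint stack.

    Hypothetical simplification: the single modifiable parameter is
    p_sense, the probability of consulting the perception module.
    """

    def __init__(self):
        self.p_sense = 0.5
        self.t = 0          # lifetime step counter (non-episodic setting)
        self.R = 0.0        # cumulative reward
        # SSA stack entries: (time of modification, cumulative reward then,
        # p_sense value before the modification)
        self.stack = [(0, 0.0, self.p_sense)]

    def maybe_modify(self):
        # Self-modification instruction: perturb p_sense and push a
        # checkpoint so SSA can undo the change later.
        self.stack.append((self.t, self.R, self.p_sense))
        self.p_sense = min(1.0, max(0.0,
                           self.p_sense + random.uniform(-0.2, 0.2)))

    def ssa_evaluate(self):
        # Success Story criterion: keep a modification only while the
        # reward-per-time slope since it exceeds the slope of the period
        # before it; otherwise pop it and restore the previous policy.
        while len(self.stack) >= 2:
            t1, R1, prev_p = self.stack[-1]
            t0, R0, _ = self.stack[-2]
            slope_since = (self.R - R1) / max(1, self.t - t1)
            slope_before = (R1 - R0) / max(1, t1 - t0)
            if slope_since <= slope_before:
                self.stack.pop()
                self.p_sense = prev_p  # undo the self-modification
            else:
                break

    def step(self, accuracy):
        # Illustrative reward: consulting perception pays off in proportion
        # to its accuracy; ignoring it pays off when it is inaccurate.
        sense = random.random() < self.p_sense
        reward = accuracy if sense else 1.0 - accuracy
        self.t += 1
        self.R += reward
        return reward

agent = SMPAgent()
for t in range(1, 2001):
    agent.step(0.9)          # perception is accurate in this toy world
    if t % 50 == 0:
        agent.maybe_modify() # occasionally self-modify
    if t % 50 == 25:
        agent.ssa_evaluate() # later, validate or undo the modification
print(round(agent.p_sense, 2), len(agent.stack))
```

The stack-based undo is the essential SSA idea: every surviving self-modification must be part of a "success story" in which each kept change improved the long-run reward rate over the change before it; the selective-perception behaviour reported in the abstract corresponds, in this sketch, to `p_sense` being driven up or down depending on how accurate perception is.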