Department of Computer Science, Tokyo Institute of Technology, 2-12-1 O-okayama, Meguro-ku, Tokyo 152-8552, Japan.
Neural Netw. 2009 Dec;22(10):1399-410. doi: 10.1016/j.neunet.2009.01.002. Epub 2009 Jan 23.
Off-policy reinforcement learning aims to make efficient use of data samples gathered under a policy that differs from the policy currently being optimized. A common approach is to use importance sampling to compensate for the bias in value function estimators caused by the mismatch between the data-sampling policy and the target policy. However, existing off-policy methods often do not explicitly take the variance of the value function estimators into account, so their performance tends to be unstable. To cope with this problem, we propose an adaptive importance sampling technique that allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.
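The abstract gives no formulas, but the bias-variance control it describes is commonly realized by flattening per-decision importance weights: the weight for step t is the cumulative likelihood ratio w_t = prod_{s<=t} pi(a_s|s_s) / pi_b(a_s|s_s), raised to an exponent nu in [0, 1], so that nu = 0 gives the uncorrected (biased, low-variance) estimator and nu = 1 gives full importance weighting (unbiased, high-variance). A minimal NumPy sketch under that assumption; the function names are illustrative, not from the paper:

```python
import numpy as np

def flattened_importance_weights(pi_target, pi_behavior, nu):
    """Per-decision importance weights raised to a flattening
    exponent nu in [0, 1].

    pi_target, pi_behavior: length-T arrays of the probabilities
    that the target and data-sampling policies assign to the
    actions actually taken along one trajectory.
    nu = 0: no correction (biased, low variance);
    nu = 1: full importance weighting (unbiased, high variance).
    """
    ratios = pi_target / pi_behavior      # per-step likelihood ratios
    w = np.cumprod(ratios)                # per-decision weights w_t
    return w ** nu                        # flattened weights w_t ** nu

def weighted_return(rewards, weights, gamma=0.95):
    """Importance-weighted estimate of the discounted return of
    one trajectory from its initial state."""
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(weights * discounts * rewards))
```

Averaging weighted_return over trajectories yields the value estimate whose bias and variance the single parameter nu controls.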
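The "variant of cross-validation" for choosing the trade-off parameter is not specified in the abstract. The sketch below is a simplified, hypothetical K-fold stand-in (select_nu_by_cv and value_estimate are invented names): each candidate nu is scored against a fully weighted held-out estimate, which is unbiased but noisy, and the nu with the smallest estimated error is kept.

```python
import numpy as np

def select_nu_by_cv(episodes, candidate_nus, gamma=0.95, n_folds=5, seed=0):
    """Choose the flattening parameter nu by K-fold cross-validation.

    episodes: list of (rewards, pi_target, pi_behavior) triples, one
    per trajectory collected under the data-sampling policy, where
    each element is a length-T NumPy array.
    Training folds use flattened weights w_t ** nu; held-out folds
    are scored with fully weighted (nu = 1) returns, so the squared
    gap approximates each candidate's estimation error.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(episodes)), n_folds)

    def value_estimate(eps, nu):
        # Mean flattened importance-weighted return over the episodes.
        vals = []
        for rewards, pi_t, pi_b in eps:
            w = np.cumprod(pi_t / pi_b) ** nu
            disc = gamma ** np.arange(len(rewards))
            vals.append(np.sum(w * disc * rewards))
        return np.mean(vals)

    scores = []
    for nu in candidate_nus:
        fold_errors = []
        for k in range(n_folds):
            test = [episodes[i] for i in folds[k]]
            train = [episodes[i] for j in range(n_folds)
                     if j != k for i in folds[j]]
            v_train = value_estimate(train, nu)   # candidate estimator
            v_test = value_estimate(test, 1.0)    # unbiased reference
            fold_errors.append((v_train - v_test) ** 2)
        scores.append(np.mean(fold_errors))
    return candidate_nus[int(np.argmin(scores))]
```

For example, select_nu_by_cv(episodes, [0.0, 0.25, 0.5, 0.75, 1.0]) scans the unit interval on a coarse grid.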