Jeffery Dick, Pawel Ladosz, Eseoghene Ben-Iwhiwhu, Hideyasu Shimadzu, Peter Kinnell, Praveen K. Pilly, Soheil Kolouri, Andrea Soltoggio
Department of Computer Science, Loughborough University, Loughborough, United Kingdom.
Mathematical Sciences, Loughborough University, Loughborough, United Kingdom.
Front Neurorobot. 2020 Dec 23;14:578675. doi: 10.3389/fnbot.2020.578675. eCollection 2020.
The ability of an agent to detect changes in an environment is key to successful adaptation. This ability involves at least two phases: learning a model of an environment, and detecting that a change is likely to have occurred when this model is no longer accurate. This task is particularly challenging in partially observable environments, such as those modeled with partially observable Markov decision processes (POMDPs). Some predictive learners are able to infer the state from observations and thus perform better under partial observability. Predictive state representations (PSRs) and neural networks are two such tools that can be trained to predict the probabilities of future observations. However, most existing methods of this kind focus primarily on static problems in which only one environment is learned. In this paper, we propose an algorithm that uses statistical tests to estimate the probability that each of several predictive models fits the current environment. We exploit the underlying probability distributions of predictive models to provide a fast and explainable method to assess and justify the model's beliefs about the current environment. Crucially, by doing so, the method can label incoming data as fitting different models, and thus can continuously train separate models in different environments. This new method is shown to prevent catastrophic forgetting when new environments, or tasks, are encountered. The method can also be of use when AI-informed decisions require justification, because its beliefs are based on statistical evidence from observations. We empirically demonstrate the benefit of the novel method with simulations in a set of POMDP environments.
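The core idea of assigning incoming observations to the best-fitting predictive model can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes discrete observations, candidate models given as categorical predicted distributions, and a simple likelihood-based test (normalized likelihoods, i.e., Bayes with a uniform prior over models) in place of the paper's specific statistical tests:

```python
import numpy as np

def model_probabilities(observations, predicted_dists):
    """Estimate the probability that each candidate model generated the
    observed window, via normalized likelihoods (uniform prior over models)."""
    log_liks = []
    for dist in predicted_dists:
        p = np.asarray(dist, dtype=float)
        # Log-likelihood of the observation window under this model.
        log_liks.append(np.sum(np.log(p[observations] + 1e-12)))
    log_liks = np.array(log_liks)
    w = np.exp(log_liks - log_liks.max())  # subtract max for numerical stability
    return w / w.sum()

# Two hypothetical environments with different observation statistics.
model_a = [0.8, 0.1, 0.1]  # environment A mostly emits observation 0
model_b = [0.1, 0.1, 0.8]  # environment B mostly emits observation 2

window = np.array([2, 2, 0, 2, 2, 1, 2, 2])  # recent observations
probs = model_probabilities(window, [model_a, model_b])
best = int(np.argmax(probs))  # route this window to model B's learner
```

Routing each window to the most probable model, and updating only that model, is what allows separate models to be trained for separate environments without overwriting one another.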