The Wellcome Trust Centre for Neuroimaging, UCL, Institute of Neurology, 12 Queen Square, London WC1N 3BG, UK.
Comput Math Methods Med. 2012;2012:937860. doi: 10.1155/2012/937860. Epub 2011 Dec 21.
It has been suggested recently that action and perception can be understood as minimising the free energy of sensory samples. This ensures that agents sample the environment to maximise the evidence for their model of the world, such that exchanges with the environment are predictable and adaptive. However, the free energy account does not invoke reward or cost functions from reinforcement learning and optimal control theory. We therefore ask whether reward is necessary to explain adaptive behaviour. The free energy formulation uses ideas from statistical physics to explain action in terms of minimising sensory surprise. Conversely, reinforcement learning has its roots in behaviourism and engineering and assumes that agents optimise a policy to maximise future reward. This paper tries to connect the two formulations and concludes that optimal policies correspond to empirical priors on the trajectories of hidden environmental states, which compel agents to seek out the (valuable) states they expect to encounter.
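To make the abstract's central idea concrete, the following is a minimal numerical sketch (not taken from the paper, and with all names and parameter values chosen purely for illustration): a one-dimensional agent minimises free energy both by updating its belief about a hidden state (perception) and by acting on the world (action). The Gaussian prior mean plays the role of an empirical prior on the state the agent expects, and therefore seeks, to occupy, which is how the abstract recasts "value".

```python
import numpy as np

# Illustrative sketch only: a toy active-inference loop under a Gaussian
# (Laplace) assumption, where free energy reduces to precision-weighted
# prediction errors. All constants and the linear environment are assumptions.

x_prior = 4.0   # prior expectation: the "valuable" state the agent expects
pi_y    = 1.0   # sensory precision (inverse variance)
pi_x    = 1.0   # prior precision
dt      = 0.01  # integration step

x  = 0.0        # true hidden environmental state
mu = 0.0        # agent's posterior expectation of x
a  = 0.0        # action, which pushes the hidden state around

rng = np.random.default_rng(0)

def free_energy(y, mu):
    """Free energy as a sum of squared, precision-weighted prediction errors."""
    return 0.5 * (pi_y * (y - mu) ** 2 + pi_x * (mu - x_prior) ** 2)

for t in range(5000):
    # Environment: action drives the hidden state, which yields noisy sensations.
    x += dt * (a - 0.1 * x)
    y = x + 0.05 * rng.standard_normal()

    # Perception: gradient descent of the belief mu on free energy.
    dF_dmu = -pi_y * (y - mu) + pi_x * (mu - x_prior)
    mu -= dt * dF_dmu

    # Action: gradient descent of a on free energy via its effect on sensations
    # (the sensitivity dy/da is taken as positive, a simplifying assumption).
    dF_da = pi_y * (y - mu)
    a -= dt * dF_da

print(f"hidden state x = {x:.2f}, belief mu = {mu:.2f}, prior = {x_prior}")
```

Run to convergence, the hidden state is drawn toward the prior expectation without any explicit reward signal: behaviour that reinforcement learning would describe with a reward function is reproduced here by a prior over the states the agent expects to sample.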