Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America.
PLoS Comput Biol. 2013;9(7):e1003150. doi: 10.1371/journal.pcbi.1003150. Epub 2013 Jul 25.
Error-driven learning rules have received considerable attention because of their close relationships to both optimal theory and neurobiological mechanisms. However, basic forms of these rules are effective under only a restricted set of conditions in which the environment is stable. Recent studies have defined optimal solutions to learning problems in more general, potentially unstable, environments, but the relevance of these complex mathematical solutions to how the brain solves these problems remains unclear. Here, we show that one such Bayesian solution can be approximated by a computationally straightforward mixture of simple error-driven 'Delta' rules. This simpler model can make effective inferences in a dynamic environment and matches human performance on a predictive-inference task using a mixture of a small number of Delta rules. This model represents an important conceptual advance in our understanding of how the brain can use relatively simple computations to make nearly optimal inferences in a dynamic world.