Department of Experimental Psychology, Ghent University, Ghent, Belgium.
PLoS Comput Biol. 2019 Aug 20;15(8):e1006604. doi: 10.1371/journal.pcbi.1006604. eCollection 2019 Aug.
We provide a novel computational framework for how biological and artificial agents can learn to flexibly couple and decouple neural task modules for cognitive processing. In this way, they can address the stability-plasticity dilemma. For this purpose, we combine two prominent computational neuroscience principles, namely Binding by Synchrony and Reinforcement Learning. The model learns to synchronize task-relevant modules, while also learning to desynchronize currently task-irrelevant modules. As a result, old (but currently task-irrelevant) information is protected from overwriting (stability), while new information can be learned quickly in currently task-relevant modules (plasticity). We combine learning to synchronize with task modules that learn via one of several classical learning algorithms (Rescorla-Wagner, backpropagation, Boltzmann machines). The resulting combined model is tested on a reversal learning paradigm in which it must learn to switch between three different task rules. We demonstrate that our combined model has significant computational advantages over the original network without synchrony, in terms of both stability and plasticity. Importantly, the resulting model's processing dynamics are also consistent with empirical data and provide empirically testable hypotheses for future MEG/EEG studies.
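The core idea of gating plasticity by synchrony can be illustrated with a minimal sketch. This is not the paper's implementation: the module count, gating variable, and learning rate are illustrative assumptions. Each of three hypothetical task modules learns via a Rescorla-Wagner delta rule, but its update is scaled by a synchrony gate in [0, 1], so desynchronized modules keep their old weights (stability) while the synchronized module learns quickly (plasticity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch (assumed setup, not the paper's architecture):
# three task modules, each a linear predictor over four input features.
n_modules, n_inputs = 3, 4
W = rng.normal(0.0, 0.1, size=(n_modules, n_inputs))  # module weights
alpha = 0.5                                           # learning rate (assumed)

def rw_step(W, x, target, gates):
    """One Rescorla-Wagner update per module, scaled by its synchrony gate."""
    for m in range(len(W)):
        pred = W[m] @ x
        # Prediction error, multiplied by the module's gate: a fully
        # desynchronized module (gate = 0) does not update at all.
        W[m] += alpha * gates[m] * (target - pred) * x
    return W

x = np.array([1.0, 0.0, 1.0, 0.0])
gates = np.array([1.0, 0.0, 0.0])   # only module 0 is task-relevant

W_before = W.copy()
W = rw_step(W, x, target=1.0, gates=gates)

# Desynchronized modules are untouched; the synchronized one has learned.
print(np.allclose(W[1], W_before[1]))  # True  (protected, stability)
print(np.allclose(W[0], W_before[0]))  # False (updated, plasticity)
```

Under this sketch, a reversal simply means reassigning the gates to a different module, so each rule's weights survive intact between switches.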