

Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias.

Authors

Laborieux Axel, Ernoult Maxence, Scellier Benjamin, Bengio Yoshua, Grollier Julie, Querlioz Damien

Affiliations

Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France.

Unité Mixte de Physique, CNRS, Thales, Université Paris-Saclay, Palaiseau, France.

Publication

Front Neurosci. 2021 Feb 18;15:633674. doi: 10.3389/fnins.2021.633674. eCollection 2021.

Abstract

Equilibrium Propagation is a biologically inspired algorithm that trains convergent recurrent neural networks with a local learning rule. This approach is a promising route toward learning-capable neuromorphic systems and comes with strong theoretical guarantees. Equilibrium Propagation operates in two phases: the network is first allowed to evolve freely and is then "nudged" toward a target; the weights of the network are then updated based solely on the states of the neurons that they connect. The weight updates of Equilibrium Propagation have been shown mathematically to approach those provided by Backpropagation Through Time (BPTT), the mainstream approach to training recurrent neural networks, when nudging is performed with infinitesimally small strength. In practice, however, the standard implementation of Equilibrium Propagation does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of Equilibrium Propagation, inherent in the use of finite nudging, is responsible for this phenomenon, and that canceling it allows training deep convolutional neural networks. We show that this bias can be greatly reduced by using symmetric nudging (a positive nudging and a negative one). We also generalize Equilibrium Propagation to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we achieve a test error of 11.7% on CIFAR-10, which approaches that achieved by BPTT and is a major improvement over standard Equilibrium Propagation, which gives 86% test error. We also apply these techniques to train an architecture with unidirectional forward and backward connections, yielding a 13.2% test error. These results highlight Equilibrium Propagation as a compelling biologically plausible approach to computing error gradients in deep neuromorphic systems.
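The bias-reduction idea in the abstract can be illustrated with a toy finite-difference analogy. Equilibrium Propagation's gradient estimate is a finite difference in the nudging strength β: a one-sided estimate carries an O(β) bias, while the symmetric (positive and negative nudging) estimate cancels the first-order term, leaving only O(β²) bias. The sketch below is not the paper's network implementation; `f` is a hypothetical stand-in scalar function used purely to show the effect of symmetrizing the difference.

```python
import math

def one_sided_estimate(f, beta):
    """One-sided finite difference: (f(beta) - f(0)) / beta.
    Biased at first order in beta."""
    return (f(beta) - f(0.0)) / beta

def symmetric_estimate(f, beta):
    """Symmetric finite difference: (f(beta) - f(-beta)) / (2 * beta).
    The O(beta) bias term cancels, leaving O(beta^2)."""
    return (f(beta) - f(-beta)) / (2.0 * beta)

# Stand-in function; its true derivative at 0 is exp(0) = 1.
f = math.exp
beta = 0.1  # finite nudging strength

err_one_sided = abs(one_sided_estimate(f, beta) - 1.0)
err_symmetric = abs(symmetric_estimate(f, beta) - 1.0)

print(err_one_sided)   # on the order of 5e-2
print(err_symmetric)   # on the order of 2e-3, much smaller
```

For the same finite β, the symmetric estimate is roughly β times closer to the true derivative, which is the mechanism the paper exploits to make finite nudging usable at scale.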


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9b2b/7930909/0549f13bc48b/fnins-15-633674-g0001.jpg
