Department of Mathematics, Kiel University, 24118 Kiel, Germany
Neural Comput. 2024 Jun 7;36(7):1424-1432. doi: 10.1162/neco_a_01668.
In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and that a stochastic gradient-descent type optimization method therefore cannot be used. In this note, we study a stochastic model for supervised learning in BNNs. We show that when each learning opportunity is processed by many local updates, these updates approximately aggregate to a (continuous) gradient step. This result suggests that stochastic gradient descent may indeed play a role in optimizing BNNs.
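To make the limiting statement concrete, here is a minimal numerical sketch, not the paper's construction: it only illustrates how many tiny, noisy, purely local updates within one learning opportunity can aggregate to a step of the continuous gradient flow dw/dt = -grad L(w). The quadratic loss and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0])   # minimizer of the toy loss L(w) = 0.5*||w - w_star||^2
w0 = np.zeros(2)                 # weights before the learning opportunity
n_local = 100_000                # number of local updates per opportunity
dt = 1.0 / n_local               # each local update is correspondingly tiny

def grad(w):
    """Gradient of the toy loss L(w) = 0.5 * ||w - w_star||^2."""
    return w - w_star

# Many small, noisy, purely local updates.
w = w0.copy()
for _ in range(n_local):
    noise = rng.normal(size=2)   # stochasticity of a single local update
    w -= dt * (grad(w) + noise)

# Exact gradient-flow solution at time 1 for the quadratic loss.
w_flow = w_star + (w0 - w_star) * np.exp(-1.0)

print("after many local updates:", w)
print("continuous gradient step:", w_flow)
```

Because each local step has size dt = 1/n_local, the accumulated noise has standard deviation of order 1/sqrt(n_local) and averages out, so the two printed vectors agree up to small fluctuations as n_local grows.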