

Learning cortical hierarchies with temporal Hebbian updates.

Authors

Aceituno Pau Vilimelis, Farinha Matilde Tristany, Loidl Reinhard, Grewe Benjamin F

Affiliations

Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland.

ETH AI Center, ETH Zurich, Zurich, Switzerland.

Publication Information

Front Comput Neurosci. 2023 May 24;17:1136010. doi: 10.3389/fncom.2023.1136010. eCollection 2023.

Abstract

A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple levels of abstraction. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and alternative biologically plausible training methods have therefore been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of these models propose that local errors are calculated for each neuron by comparing apical and somatic activities. From a neuroscience perspective, however, it is not clear how a neuron could compare such compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, the inference latency and the amount of top-down feedback necessary, and show that these are equivalent to the error-based losses used in machine learning. Moreover, we show that differential Hebbian updates work equally well in other feedback-based deep learning frameworks such as Predictive Coding or Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that explains how temporal Hebbian learning rules can implement supervised hierarchical learning.
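
To make the mechanism described in the abstract concrete, the snippet below is a minimal sketch of a rate-based differential Hebbian update for a single layer: top-down feedback nudges the postsynaptic firing rates, and the weight change correlates presynaptic rates with the resulting change in postsynaptic rate. The layer setup, the nudging constant `beta`, the target rates `r_target`, and all variable names are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of a rate-based differential Hebbian update (assumed setup):
#   - one feedforward layer with weights W and rectified-linear rates,
#   - presynaptic rates r_pre held fixed over the update window,
#   - apical/top-down feedback nudges the postsynaptic rates toward a target.
rng = np.random.default_rng(0)

n_pre, n_post = 8, 4
W = rng.normal(scale=0.1, size=(n_post, n_pre))   # feedforward weights
r_pre = rng.uniform(0.0, 1.0, size=n_pre)         # presynaptic firing rates

# Somatic rate driven by the feedforward input alone.
r_post_before = np.maximum(W @ r_pre, 0.0)

# Apical feedback nudges the postsynaptic rate toward a top-down target.
beta = 0.1                                        # nudging strength (assumed)
r_target = rng.uniform(0.0, 1.0, size=n_post)     # top-down target rates (assumed)
r_post_after = r_post_before + beta * (r_target - r_post_before)

# Differential Hebbian update: presynaptic rate times the *change* in
# postsynaptic rate, a rate-based analogue of STDP.
eta = 0.05                                        # learning rate
dr_post = r_post_after - r_post_before            # feedback-induced rate change
dW = eta * np.outer(dr_post, r_pre)

W += dW
```

Because `dr_post` is proportional to the feedback-induced nudge (the gap between the top-down target and the feedforward rate), the resulting update behaves like an error-correcting step while using only quantities that are locally available at the synapse.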

