Deep Learning With Asymmetric Connections and Hebbian Updates.

Author Information

Yali Amit

Affiliation

Department of Statistics, University of Chicago, Chicago, IL, United States.

Publication Information

Front Comput Neurosci. 2019 Apr 4;13:18. doi: 10.3389/fncom.2019.00018. eCollection 2019.

DOI: 10.3389/fncom.2019.00018
PMID: 31019458
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6458299/
Abstract

We show that deep networks can be trained using Hebbian updates yielding similar performance to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers, implicit in back-propagation, the feedback weights are separate from the feedforward weights. The feedback weights are also updated with a local rule, the same as the feedforward weights: a weight is updated solely based on the product of activity of the units it connects. With fixed feedback weights, as proposed in Lillicrap et al. (2016), performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain the same throughout training, thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, and the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with tied weights across space, which is not biologically plausible. We show that similar performance is achieved with untied layers, also known as locally connected layers, corresponding to the connectivity implied by the convolutional layers, but where weights are untied and updated separately. In the linear case we show theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights.
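
The update rule described in the abstract is local in both directions: each feedforward weight changes by the product of the activities of the two units it connects, and each feedback weight is updated with the same product, so the feedback matrix drifts toward the transpose of the feedforward matrix. Below is a minimal NumPy sketch of this scheme for a single hidden layer with a linear readout and squared-error loss; the sizes, names (W1, W2, B2, train_step), and learning rate are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and learning rate (assumptions, not from the paper).
n_in, n_hid, n_out, lr = 784, 256, 10, 0.01

W1 = rng.normal(0.0, 0.05, (n_hid, n_in))   # feedforward, input -> hidden
W2 = rng.normal(0.0, 0.05, (n_out, n_hid))  # feedforward, hidden -> output
B2 = rng.normal(0.0, 0.05, (n_hid, n_out))  # separate feedback weights, random (not W2.T)

def relu(z):
    return np.maximum(z, 0.0)

def train_step(x, t):
    """One update on input x of shape (n_in,) with one-hot target t of shape (n_out,)."""
    global W1, W2, B2
    a1 = relu(W1 @ x)              # hidden activity
    e = W2 @ a1 - t                # output error (derivative of the squared loss)
    d1 = (B2 @ e) * (a1 > 0)       # error signal carried back by B2, not by W2.T
    # Every update is the product of the activities of the two units
    # the weight connects (a local, Hebbian-style rule):
    W2 -= lr * np.outer(e, a1)
    W1 -= lr * np.outer(d1, x)
    B2 -= lr * np.outer(a1, e)     # same rule applied to the feedback weights

Because the increment to B2 is exactly the transpose of the increment to W2, initializing B2 = W2.T keeps the two aligned throughout training and the loop reduces to ordinary back-propagation (the Zipser and Rumelhart case); with independent random initialization it is the asymmetric regime studied in the paper, and freezing the B2 update recovers fixed feedback alignment (Lillicrap et al., 2016).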

Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/43dc73dcdb56/fncom-13-00018-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/4ec24dea90f9/fncom-13-00018-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/fddb0c182be5/fncom-13-00018-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/bdba2b6eff62/fncom-13-00018-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/42c617f50957/fncom-13-00018-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/e310906a7951/fncom-13-00018-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/c915f3363a0a/fncom-13-00018-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/075c667cdf9c/fncom-13-00018-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/4e07ed7d6394/fncom-13-00018-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/0644c727bb54/fncom-13-00018-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/fd58a06db8ab/fncom-13-00018-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b6/6458299/1353d3c79e65/fncom-13-00018-g0012.jpg

Similar Articles

1. Deep Learning With Asymmetric Connections and Hebbian Updates.
Front Comput Neurosci. 2019 Apr 4;13:18. doi: 10.3389/fncom.2019.00018. eCollection 2019.
2. Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks.
Front Comput Neurosci. 2022 Mar 21;16:789253. doi: 10.3389/fncom.2022.789253. eCollection 2022.
3. Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates.
Front Comput Neurosci. 2024 May 16;18:1240348. doi: 10.3389/fncom.2024.1240348. eCollection 2024.
4. Direct Feedback Alignment With Sparse Connections for Local Learning.
Front Neurosci. 2019 May 24;13:525. doi: 10.3389/fnins.2019.00525. eCollection 2019.
5. Learning cortical hierarchies with temporal Hebbian updates.
Front Comput Neurosci. 2023 May 24;17:1136010. doi: 10.3389/fncom.2023.1136010. eCollection 2023.
6. Unsupervised learning by competing hidden units.
Proc Natl Acad Sci U S A. 2019 Apr 16;116(16):7723-7731. doi: 10.1073/pnas.1820458116. Epub 2019 Mar 29.
7. Accelerating DNN Training Through Selective Localized Learning.
Front Neurosci. 2022 Jan 11;15:759807. doi: 10.3389/fnins.2021.759807. eCollection 2021.
8. A theory of local learning, the learning channel, and the optimality of backpropagation.
Neural Netw. 2016 Nov;83:51-74. doi: 10.1016/j.neunet.2016.07.006. Epub 2016 Aug 5.
9. Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation.
Front Comput Neurosci. 2017 May 4;11:24. doi: 10.3389/fncom.2017.00024. eCollection 2017.
10. Contrastive Hebbian learning with random feedback weights.
Neural Netw. 2019 Jun;114:1-14. doi: 10.1016/j.neunet.2019.01.008. Epub 2019 Feb 21.

Cited By

1. Effective methods and framework for energy-based local learning of deep neural networks.
Front Artif Intell. 2025 Aug 26;8:1605706. doi: 10.3389/frai.2025.1605706. eCollection 2025.
2. On the ability of standard and brain-constrained deep neural networks to support cognitive superposition: a position paper.
Cogn Neurodyn. 2024 Dec;18(6):3383-3400. doi: 10.1007/s11571-023-10061-1. Epub 2024 Feb 4.
3. Medical prediction from missing data with max-minus negative regularized dropout.
Front Neurosci. 2023 Jul 13;17:1221970. doi: 10.3389/fnins.2023.1221970. eCollection 2023.
4. Distinguishing Learning Rules with Brain Machine Interfaces.
Adv Neural Inf Process Syst. 2022 Dec;35:25937-25950.
5. Meta-learning biologically plausible plasticity rules with random feedback pathways.
Nat Commun. 2023 Mar 31;14(1):1805. doi: 10.1038/s41467-023-37562-1.
6. Pooling strategies in V1 can account for the functional and structural diversity across species.
PLoS Comput Biol. 2022 Jul 21;18(7):e1010270. doi: 10.1371/journal.pcbi.1010270. eCollection 2022 Jul.
7. Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks.
Front Comput Neurosci. 2022 Mar 21;16:789253. doi: 10.3389/fncom.2022.789253. eCollection 2022.
8. A Hebbian Approach to Non-Spatial Prelinguistic Reasoning.
Brain Sci. 2022 Feb 17;12(2):281. doi: 10.3390/brainsci12020281.
9. Cell-type-specific neuromodulation guides synaptic credit assignment in a spiking neural network.
Proc Natl Acad Sci U S A. 2021 Dec 21;118(51). doi: 10.1073/pnas.2111821118.
10. Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy.
Front Comput Neurosci. 2021 Jul 28;15:666131. doi: 10.3389/fncom.2021.666131. eCollection 2021.

References

1. Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of NeoHebbian Three-Factor Learning Rules.
Front Neural Circuits. 2018 Jul 31;12:53. doi: 10.3389/fncir.2018.00053. eCollection 2018.
2. Control of synaptic plasticity in deep cortical networks.
Nat Rev Neurosci. 2018 Feb 16;19(3):166-180. doi: 10.1038/nrn.2018.6.
3. Towards deep learning with segregated dendrites.
Elife. 2017 Dec 5;6:e22901. doi: 10.7554/eLife.22901.
4. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.
Annu Rev Vis Sci. 2015 Nov 24;1:417-446. doi: 10.1146/annurev-vision-082114-035447.
5. An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity.
Neural Comput. 2017 May;29(5):1229-1262. doi: 10.1162/NECO_a_00949. Epub 2017 Mar 23.
6. Random synaptic feedback weights support error backpropagation for deep learning.
Nat Commun. 2016 Nov 8;7:13276. doi: 10.1038/ncomms13276.
7. Toward an Integration of Deep Learning and Neuroscience.
Front Comput Neurosci. 2016 Sep 14;10:94. doi: 10.3389/fncom.2016.00094. eCollection 2016.
8. Using goal-driven deep learning models to understand sensory cortex.
Nat Neurosci. 2016 Mar;19(3):356-65. doi: 10.1038/nn.4244.
9. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
PLoS Comput Biol. 2015 Dec 3;11(12):e1004566. doi: 10.1371/journal.pcbi.1004566. eCollection 2015 Dec.
10. Recurrent network of perceptrons with three state synapses achieves competitive classification on real inputs.
Front Comput Neurosci. 2012 Jun 22;6:39. doi: 10.3389/fncom.2012.00039. eCollection 2012.