
Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks.

Authors

Frenkel Charlotte, Lefebvre Martin, Bol David

Affiliations

Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zurich, Switzerland.

ICTEAM Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium.

Publication

Front Neurosci. 2021 Feb 10;15:629892. doi: 10.3389/fnins.2021.629892. eCollection 2021.

DOI: 10.3389/fnins.2021.629892
PMID: 33642986
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7902857/
Abstract

While the backpropagation of error algorithm enables deep neural network training, it implies (i) bidirectional synaptic weight transport and (ii) update locking until the forward and backward passes are completed. Not only do these constraints preclude biological plausibility, but they also hinder the development of low-cost adaptive smart sensors at the edge, as they severely constrain memory accesses and entail buffering overhead. In this work, we show that the one-hot-encoded labels provided in supervised classification problems, denoted as targets, can be viewed as a proxy for the error sign. Therefore, their fixed random projections enable a layerwise feedforward training of the hidden layers, thus solving the weight transport and update locking problems while relaxing the computational and memory requirements. Based on these observations, we propose the direct random target projection (DRTP) algorithm and demonstrate that it provides a tradeoff between accuracy and computational cost that is suitable for adaptive edge computing devices.
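The layerwise rule the abstract describes — replacing the backpropagated error with a fixed random projection of the one-hot target — can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the paper's exact configuration: the layer sizes, learning rate, tanh hidden units, and softmax output are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only).
n_in, n_h1, n_h2, n_out = 16, 32, 32, 4
lr = 0.05

# Trainable forward weights.
W1 = rng.normal(0, 0.1, (n_h1, n_in))
W2 = rng.normal(0, 0.1, (n_h2, n_h1))
W3 = rng.normal(0, 0.1, (n_out, n_h2))

# Fixed random target-projection matrices: drawn once, never learned.
# Because they are zero-mean random, the overall sign of the modulatory
# signal they produce is immaterial for the hidden layers.
B1 = rng.normal(0, 0.1, (n_h1, n_out))
B2 = rng.normal(0, 0.1, (n_h2, n_out))

def drtp_step(x, y_onehot):
    """One training step on sample (x, y). Returns output probabilities."""
    global W1, W2, W3
    # Forward pass.
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    logits = W3 @ h2
    p = np.exp(logits - logits.max())
    p /= p.sum()                           # softmax

    # Hidden layers: the "delta" is a fixed random projection of the
    # one-hot target, so no backward pass and no symmetric weights are
    # needed; each layer can update as soon as its activation is known.
    d1 = (B1 @ y_onehot) * (1.0 - h1**2)   # tanh'
    d2 = (B2 @ y_onehot) * (1.0 - h2**2)
    # Output layer: trained with the true error, as in ordinary
    # cross-entropy + softmax.
    d3 = p - y_onehot

    W1 -= lr * np.outer(d1, x)
    W2 -= lr * np.outer(d2, h1)
    W3 -= lr * np.outer(d3, h2)
    return p
```

Note that the hidden-layer updates depend only on the input, the layer's own activation, and the target — the property that removes both the weight-transport and update-locking constraints discussed above.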


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b1e/7902857/53dd51b2ce3e/fnins-15-629892-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b1e/7902857/5a1d98b593ee/fnins-15-629892-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b1e/7902857/c458720970c9/fnins-15-629892-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b1e/7902857/040aaf4cec90/fnins-15-629892-g0004.jpg

Similar Articles

1
Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks.
Front Neurosci. 2021 Feb 10;15:629892. doi: 10.3389/fnins.2021.629892. eCollection 2021.
2
Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks.
Front Comput Neurosci. 2022 Mar 21;16:789253. doi: 10.3389/fncom.2022.789253. eCollection 2022.
3
Biologically plausible deep learning - But how far can we go with shallow networks?
Neural Netw. 2019 Oct;118:90-101. doi: 10.1016/j.neunet.2019.06.001. Epub 2019 Jun 20.
4
Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks.
Front Neurosci. 2017 Sep 6;11:496. doi: 10.3389/fnins.2017.00496. eCollection 2017.
5
Meta-learning biologically plausible plasticity rules with random feedback pathways.
Nat Commun. 2023 Mar 31;14(1):1805. doi: 10.1038/s41467-023-37562-1.
6
Deep Supervised Learning Using Local Errors.
Front Neurosci. 2018 Aug 31;12:608. doi: 10.3389/fnins.2018.00608. eCollection 2018.
7
Contrastive Hebbian learning with random feedback weights.
Neural Netw. 2019 Jun;114:1-14. doi: 10.1016/j.neunet.2019.01.008. Epub 2019 Feb 21.
8
Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates.
Front Comput Neurosci. 2024 May 16;18:1240348. doi: 10.3389/fncom.2024.1240348. eCollection 2024.
9
On-Chip Training Spiking Neural Networks Using Approximated Backpropagation With Analog Synaptic Devices.
Front Neurosci. 2020 Jul 7;14:423. doi: 10.3389/fnins.2020.00423. eCollection 2020.
10
Direct Feedback Alignment With Sparse Connections for Local Learning.
Front Neurosci. 2019 May 24;13:525. doi: 10.3389/fnins.2019.00525. eCollection 2019.

Cited By

1
Self-Contrastive Forward-Forward algorithm.
Nat Commun. 2025 Jul 1;16(1):5978. doi: 10.1038/s41467-025-61037-0.
2
A Learning Probabilistic Boolean Network Model of a Manufacturing Process with Applications in System Asset Maintenance.
Entropy (Basel). 2025 Apr 25;27(5):463. doi: 10.3390/e27050463.
3
Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates.
Front Comput Neurosci. 2024 May 16;18:1240348. doi: 10.3389/fncom.2024.1240348. eCollection 2024.

References Cited in This Article

1
Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE).
Front Neurosci. 2020 May 12;14:424. doi: 10.3389/fnins.2020.00424. eCollection 2020.
2
Synaptic Plasticity Forms and Functions.
Annu Rev Neurosci. 2020 Jul 8;43:95-117. doi: 10.1146/annurev-neuro-090919-022842. Epub 2020 Feb 19.
3
MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor With Stochastic Spike-Driven Online Learning.
IEEE Trans Biomed Circuits Syst. 2019 Oct;13(5):999-1010. doi: 10.1109/TBCAS.2019.2928793. Epub 2019 Jul 15.
4
Training an Ising machine with equilibrium propagation.
Nat Commun. 2024 Apr 30;15(1):3671. doi: 10.1038/s41467-024-46879-4.
5
Biologically plausible local synaptic learning rules robustly implement deep supervised learning.
Front Neurosci. 2023 Oct 11;17:1160899. doi: 10.3389/fnins.2023.1160899. eCollection 2023.
6
Neuromorphic artificial intelligence systems.
Front Neurosci. 2022 Sep 14;16:959626. doi: 10.3389/fnins.2022.959626. eCollection 2022.
7
Introducing principles of synaptic integration in the optimization of deep neural networks.
Nat Commun. 2022 Apr 7;13(1):1885. doi: 10.1038/s41467-022-29491-2.
8
A Sparsity-Driven Backpropagation-Less Learning Framework Using Populations of Spiking Growth Transform Neurons.
Front Neurosci. 2021 Jul 28;15:715451. doi: 10.3389/fnins.2021.715451. eCollection 2021.
9
Large-Scale Neuromorphic Spiking Array Processors: A Quest to Mimic the Brain.
Front Neurosci. 2018 Dec 3;12:891. doi: 10.3389/fnins.2018.00891. eCollection 2018.
10
A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS.
IEEE Trans Biomed Circuits Syst. 2019 Feb;13(1):145-158. doi: 10.1109/TBCAS.2018.2880425. Epub 2018 Nov 9.
11
Deep Supervised Learning Using Local Errors.
Front Neurosci. 2018 Aug 31;12:608. doi: 10.3389/fnins.2018.00608. eCollection 2018.
12
Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning.
Front Neurosci. 2018 Aug 29;12:583. doi: 10.3389/fnins.2018.00583. eCollection 2018.
13
Learning in the Machine: Random Backpropagation and the Deep Learning Channel.
Artif Intell. 2018 Jul;260:1-35. doi: 10.1016/j.artint.2018.03.003. Epub 2018 Apr 3.
14
Towards deep learning with segregated dendrites.
Elife. 2017 Dec 5;6:e22901. doi: 10.7554/eLife.22901.
15
Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System.
Front Neurorobot. 2017 Jul 11;11:28. doi: 10.3389/fnbot.2017.00028. eCollection 2017.