Deep Supervised Learning Using Local Errors.

Authors

Hesham Mostafa, Vishwajith Ramesh, Gert Cauwenberghs

Affiliations

Institute for Neural Computation, University of California, San Diego, San Diego, CA, United States.

Department of Bioengineering, University of California, San Diego, San Diego, CA, United States.

Publication

Front Neurosci. 2018 Aug 31;12:608. doi: 10.3389/fnins.2018.00608. eCollection 2018.

DOI: 10.3389/fnins.2018.00608
PMID: 30233295
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6127296/
Abstract

Error backpropagation is a highly effective mechanism for learning high-quality hierarchical features in deep networks. Updating the features or weights in one layer, however, requires waiting for the propagation of error signals from higher layers. Learning using delayed and non-local errors makes it hard to reconcile backpropagation with the learning mechanisms observed in biological neural networks as it requires the neurons to maintain a memory of the input long enough until the higher-layer errors arrive. In this paper, we propose an alternative learning mechanism where errors are generated locally in each layer using fixed, random auxiliary classifiers. Lower layers could thus be trained independently of higher layers and training could either proceed layer by layer, or simultaneously in all layers using local error information. We address biological plausibility concerns such as weight symmetry requirements and show that the proposed learning mechanism based on fixed, broad, and random tuning of each neuron to the classification categories outperforms the biologically-motivated feedback alignment learning technique on the CIFAR10 dataset, approaching the performance of standard backpropagation. Our approach highlights a potential biological mechanism for the supervised, or task-dependent, learning of feature hierarchies. In addition, we show that it is well suited for learning deep networks in custom hardware where it can drastically reduce memory traffic and data communication overheads. Code used to run all learning experiments is available under https://gitlab.com/hesham-mostafa/learning-using-local-erros.git.
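The local-errors mechanism is simple enough to sketch. Below is a minimal, illustrative NumPy version of one training step, not the authors' released implementation (see the GitLab link above for that): each hidden layer owns a fixed random auxiliary classifier that maps its activations to class scores, a local loss is computed at that readout, and the resulting error updates only that layer's forward weights. The layer sizes, the use of cross-entropy as the local loss, and all names below are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for a 2-hidden-layer MLP (illustrative, not from the paper).
n_in, n_h1, n_h2, n_classes = 784, 256, 256, 10

# Trainable forward weights.
W1 = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_h1, n_in))
W2 = rng.normal(0.0, np.sqrt(2.0 / n_h1), (n_h2, n_h1))

# Fixed random auxiliary classifiers, one per hidden layer.
# They are never updated; they exist only to generate local errors.
C1 = rng.normal(0.0, 1.0 / np.sqrt(n_h1), (n_classes, n_h1))
C2 = rng.normal(0.0, 1.0 / np.sqrt(n_h2), (n_classes, n_h2))

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def local_update(W, C, x, y_onehot, lr=1e-2):
    # One local learning step for a single layer. The error comes from
    # this layer's own fixed random classifier C, so no gradient ever
    # arrives from higher layers.
    a = relu(W @ x)                       # layer activation
    p = softmax(C @ a)                    # local class prediction
    e = p - y_onehot                      # cross-entropy error at the local readout
    delta = (C.T @ e) * (a > 0)           # error carried one step back through fixed C
    W -= lr * (delta @ x.T) / x.shape[1]  # update uses only local quantities
    return W, a

# One step on a random toy batch (fabricated data, for shape-checking only).
x = rng.normal(size=(n_in, 32))
y = np.eye(n_classes)[:, rng.integers(0, n_classes, 32)]

W1, a1 = local_update(W1, C1, x, y)   # layer 1 trains from its own error
W2, a2 = local_update(W2, C2, a1, y)  # layer 2 needs only a1, no gradient from above

Note that the second call consumes only the activation a1, never a gradient from layer 2's loss, so the two updates are decoupled: training can proceed layer by layer or in all layers simultaneously, which is the property the abstract credits with cutting memory traffic and data communication in custom hardware.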


Figures

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7f7e/6127296/6898988db926/fnins-12-00608-g0001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7f7e/6127296/5e4d6308752f/fnins-12-00608-g0002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7f7e/6127296/8ced4a48a997/fnins-12-00608-g0003.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7f7e/6127296/824852c9f150/fnins-12-00608-g0004.jpg
Figure 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7f7e/6127296/0936f1c95632/fnins-12-00608-g0005.jpg
Figure 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7f7e/6127296/669449f48741/fnins-12-00608-g0006.jpg

Similar Articles

1. Deep Supervised Learning Using Local Errors.
   Front Neurosci. 2018 Aug 31;12:608. doi: 10.3389/fnins.2018.00608. eCollection 2018.
2. Biologically plausible deep learning - But how far can we go with shallow networks?
   Neural Netw. 2019 Oct;118:90-101. doi: 10.1016/j.neunet.2019.06.001. Epub 2019 Jun 20.
3. Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks.
   Front Neurosci. 2017 Sep 6;11:496. doi: 10.3389/fnins.2017.00496. eCollection 2017.
4. Direct Feedback Alignment With Sparse Connections for Local Learning.
   Front Neurosci. 2019 May 24;13:525. doi: 10.3389/fnins.2019.00525. eCollection 2019.
5. Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks.
   Front Comput Neurosci. 2022 Mar 21;16:789253. doi: 10.3389/fncom.2022.789253. eCollection 2022.
6. Unsupervised learning by competing hidden units.
   Proc Natl Acad Sci U S A. 2019 Apr 16;116(16):7723-7731. doi: 10.1073/pnas.1820458116. Epub 2019 Mar 29.
7. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
   Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
8. Learning cortical hierarchies with temporal Hebbian updates.
   Front Comput Neurosci. 2023 May 24;17:1136010. doi: 10.3389/fncom.2023.1136010. eCollection 2023.
9. Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates.
   Front Comput Neurosci. 2024 May 16;18:1240348. doi: 10.3389/fncom.2024.1240348. eCollection 2024.
10. Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks.
   Front Neurosci. 2021 Feb 10;15:629892. doi: 10.3389/fnins.2021.629892. eCollection 2021.

Cited By

1. Contrastive signal-dependent plasticity: Self-supervised learning in spiking neural circuits.
   Sci Adv. 2024 Oct 25;10(43):eadn6076. doi: 10.1126/sciadv.adn6076. Epub 2024 Oct 23.
2. Learnable Leakage and Onset-Spiking Self-Attention in SNNs with Local Error Signals.
   Sensors (Basel). 2023 Dec 12;23(24):9781. doi: 10.3390/s23249781.
3. Efficient training of spiking neural networks with temporally-truncated local backpropagation through time.
   Front Neurosci. 2023 Apr 6;17:1047008. doi: 10.3389/fnins.2023.1047008. eCollection 2023.
4. Decoupled neural network training with re-computation and weight prediction.
   PLoS One. 2023 Feb 23;18(2):e0276427. doi: 10.1371/journal.pone.0276427. eCollection 2023.
5. Neuromorphic artificial intelligence systems.
   Front Neurosci. 2022 Sep 14;16:959626. doi: 10.3389/fnins.2022.959626. eCollection 2022.
6. BlocTrain: Block-Wise Conditional Training and Inference for Efficient Spike-Based Deep Learning.
   Front Neurosci. 2021 Oct 29;15:603433. doi: 10.3389/fnins.2021.603433. eCollection 2021.
7. Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits.
   Nat Neurosci. 2021 Jul;24(7):1010-1019. doi: 10.1038/s41593-021-00857-x. Epub 2021 May 13.
8. Can the Brain Do Backpropagation? - Exact Implementation of Backpropagation in Predictive Coding Networks.
   Adv Neural Inf Process Syst. 2020;33:22566-22579.
9. Learning to Approximate Functions Using Nb-Doped SrTiO3 Memristors.
   Front Neurosci. 2021 Feb 19;14:627276. doi: 10.3389/fnins.2020.627276. eCollection 2020.
10. Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks.
   Front Neurosci. 2021 Feb 10;15:629892. doi: 10.3389/fnins.2021.629892. eCollection 2021.

References

1. NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps.
   IEEE Trans Neural Netw Learn Syst. 2019 Mar;30(3):644-656. doi: 10.1109/TNNLS.2018.2852335. Epub 2018 Jul 26.
2. Learning in the Machine: Random Backpropagation and the Deep Learning Channel.
   Artif Intell. 2018 Jul;260:1-35. doi: 10.1016/j.artint.2018.03.003. Epub 2018 Apr 3.
3. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.
   Neural Comput. 2018 Jun;30(6):1514-1541. doi: 10.1162/neco_a_01086. Epub 2018 Apr 13.
4. Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks.
   Front Neurosci. 2017 Sep 6;11:496. doi: 10.3389/fnins.2017.00496. eCollection 2017.
5. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2018 Jul;29(7):3227-3235. doi: 10.1109/TNNLS.2017.2726060. Epub 2017 Aug 1.
6. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.
   Front Neurosci. 2017 Jun 21;11:324. doi: 10.3389/fnins.2017.00324. eCollection 2017.
7. Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation.
   Front Comput Neurosci. 2017 May 4;11:24. doi: 10.3389/fncom.2017.00024. eCollection 2017.
8. Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights.
   Neural Comput. 2017 Mar;29(3):578-602. doi: 10.1162/NECO_a_00929. Epub 2017 Jan 17.
9. Random synaptic feedback weights support error backpropagation for deep learning.
   Nat Commun. 2016 Nov 8;7:13276. doi: 10.1038/ncomms13276.
10. Deep learning.
   Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.