

Strongly improved stability and faster convergence of temporal sequence learning by using input correlations only.

Authors

Porr Bernd, Wörgötter Florentin

Affiliations

Department of Electronics and Electrical Engineering, University of Glasgow, Glasgow, GT12 8LT, Scotland.

Publication

Neural Comput. 2006 Jun;18(6):1380-412. doi: 10.1162/neco.2006.18.6.1380.

DOI: 10.1162/neco.2006.18.6.1380
PMID: 16764508
Abstract

Currently all important, low-level, unsupervised network learning algorithms follow the paradigm of Hebb, where input and output activity are correlated to change the connection strength of a synapse. However, as a consequence, classical Hebbian learning always carries a potentially destabilizing autocorrelation term, which is due to the fact that every input is in a weighted form reflected in the neuron's output. This self-correlation can lead to positive feedback, where increasing weights will increase the output, and vice versa, which may result in divergence. This can be avoided by different strategies like weight normalization or weight saturation, which, however, can cause different problems. Consequently, in most cases, high learning rates cannot be used for Hebbian learning, leading to relatively slow convergence. Here we introduce a novel correlation-based learning rule that is related to our isotropic sequence order (ISO) learning rule (Porr & Wörgötter, 2003a), but replaces the derivative of the output in the learning rule with the derivative of the reflex input. Hence, the new rule uses input correlations only, effectively implementing strict heterosynaptic learning. This looks like a minor modification but leads to dramatically improved properties. Elimination of the output from the learning rule removes the unwanted, destabilizing autocorrelation term, allowing us to use high learning rates. As a consequence, we can mathematically show that the theoretical optimum of one-shot learning can be reached under ideal conditions with the new rule. This result is then tested against four different experimental setups, and we will show that in all of them, very few (and sometimes only one) learning experiences are needed to achieve the learning goal. As a consequence, the new learning rule is up to 100 times faster and in general more stable than ISO learning.
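The modification the abstract describes is small enough to sketch in a few lines. The toy below is an illustrative assumption, not the paper's exact setup: Gaussian pulses stand in for the bandpass-filtered resonator inputs, and the plastic weight is updated once per trial. It contrasts the ISO update, which correlates the predictive input with the derivative of the output v, against the input-correlation (ICO) update, which uses the derivative of the reflex input u0 instead:

```python
import numpy as np

def bump(t, center, width=5.0):
    """Smooth unit pulse standing in for a bandpass-filtered input."""
    return np.exp(-(t - center) ** 2 / (2 * width ** 2))

def learn(rule, mu=0.5, trials=20, T=100.0, dt=1.0):
    """Train the plastic weight w1 with either rule; w0 is the fixed reflex weight."""
    t = np.arange(0.0, T, dt)
    u1 = bump(t, 30.0)              # predictive input (arrives first)
    u0 = bump(t, 50.0)              # reflex input (arrives later)
    w0, w1 = 1.0, 0.0
    for _ in range(trials):
        v = w0 * u0 + w1 * u1       # neuron output
        # ISO correlates u1 with dv/dt (contains the autocorrelation term);
        # ICO correlates u1 with du0/dt (inputs only, strictly heterosynaptic)
        post = np.gradient(v, dt) if rule == "iso" else np.gradient(u0, dt)
        w1 += mu * np.sum(u1 * post) * dt
    return w1

w_iso = learn("iso")   # output-derivative rule
w_ico = learn("ico")   # reflex-input-derivative rule
```

Because the ICO increment never depends on w1 through the output v, the positive-feedback loop that destabilizes Hebbian-style rules is absent, which is why the learning rate can be pushed toward the one-shot regime the abstract describes.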


Similar Articles

1
Strongly improved stability and faster convergence of temporal sequence learning by using input correlations only.
Neural Comput. 2006 Jun;18(6):1380-412. doi: 10.1162/neco.2006.18.6.1380.
2
Hebbian errors in learning: an analysis using the Oja model.
J Theor Biol. 2009 Jun 21;258(4):489-501. doi: 10.1016/j.jtbi.2009.01.036. Epub 2009 Feb 25.
3
Fast heterosynaptic learning in a robot food retrieval task inspired by the limbic system.
Biosystems. 2007 May-Jun;89(1-3):294-9. doi: 10.1016/j.biosystems.2006.04.026. Epub 2006 Nov 15.
4
Isotropic sequence order learning.
Neural Comput. 2003 Apr;15(4):831-64. doi: 10.1162/08997660360581921.
5
Learning with "relevance": using a third factor to stabilize Hebbian learning.
Neural Comput. 2007 Oct;19(10):2694-719. doi: 10.1162/neco.2007.19.10.2694.
6
Intrinsic stabilization of output rates by spike-based Hebbian learning.
Neural Comput. 2001 Dec;13(12):2709-41. doi: 10.1162/089976601317098501.
7
Learning only when necessary: better memories of correlated patterns in networks with bounded synapses.
Neural Comput. 2005 Oct;17(10):2106-38. doi: 10.1162/0899766054615644.
8
Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison.
Biol Cybern. 2008 Mar;98(3):259-72. doi: 10.1007/s00422-007-0209-6. Epub 2008 Jan 15.
9
A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
10
Isotropic-sequence-order learning in a closed-loop behavioural system.
Philos Trans A Math Phys Eng Sci. 2003 Oct 15;361(1811):2225-44. doi: 10.1098/rsta.2003.1273.

Cited By

1
Temperature stabilization with Hebbian learning using an autonomous optoelectronic dendritic unit.
Front Optoelectron. 2025 Apr 3;18(1):7. doi: 10.1007/s12200-025-00151-9.
2
Self-configuring feedback loops for sensorimotor control.
Elife. 2022 Nov 14;11:e77216. doi: 10.7554/eLife.77216.
3
Learning multisensory cue integration: A computational model of crossmodal synaptic plasticity enables reliability-based cue weighting by capturing stimulus statistics.
Front Neural Circuits. 2022 Aug 8;16:921453. doi: 10.3389/fncir.2022.921453. eCollection 2022.
4
Differential Hebbian learning with time-continuous signals for active noise reduction.
PLoS One. 2022 May 26;17(5):e0266679. doi: 10.1371/journal.pone.0266679. eCollection 2022.
5
Neural Control and Online Learning for Speed Adaptation of Unmanned Aerial Vehicles.
Front Neural Circuits. 2022 Apr 25;16:839361. doi: 10.3389/fncir.2022.839361. eCollection 2022.
6
The SMOOTH-Robot: A Modular, Interactive Service Robot.
Front Robot AI. 2021 Oct 5;8:645639. doi: 10.3389/frobt.2021.645639. eCollection 2021.
7
Draculab: A Python Simulator for Firing Rate Neural Networks With Delayed Adaptive Connections.
Front Neuroinform. 2019 Apr 2;13:18. doi: 10.3389/fninf.2019.00018. eCollection 2019.
8
General differential Hebbian learning: Capturing temporal relations between events in neural networks and the brain.
PLoS Comput Biol. 2018 Aug 28;14(8):e1006227. doi: 10.1371/journal.pcbi.1006227. eCollection 2018 Aug.
9
An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity.
Front Neurorobot. 2017 Mar 9;11:11. doi: 10.3389/fnbot.2017.00011. eCollection 2017.
10
Neuromodulatory adaptive combination of correlation-based learning in cerebellum and reward-based learning in basal ganglia for goal-directed behavior control.
Front Neural Circuits. 2014 Oct 28;8:126. doi: 10.3389/fncir.2014.00126. eCollection 2014.