


Why Do Similarity Matching Objectives Lead to Hebbian/Anti-Hebbian Networks?

Authors

Pehlevan Cengiz, Sengupta Anirvan M, Chklovskii Dmitri B

Affiliations

Center for Computational Biology, Flatiron Institute, New York, NY 10010, U.S.A.

Center for Computational Biology, Flatiron Institute, New York, NY 10010, U.S.A., and Physics and Astronomy Department, Rutgers University, New Brunswick, NJ 08901, U.S.A.

Publication

Neural Comput. 2018 Jan;30(1):84-124. doi: 10.1162/neco_a_01018. Epub 2017 Sep 28.

DOI: 10.1162/neco_a_01018
PMID: 28957017
Abstract

Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules in both the offline and online settings. We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that the networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
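The similarity matching objective at the center of the abstract can be written compactly. The following is a sketch of the standard formulation (the paper's variable substitutions and the fractional-exponent variant are not reproduced here): given inputs $X$, the network seeks outputs $Y$ whose pairwise similarities match those of the inputs.

```latex
% Similarity matching: output Gramian tracks input Gramian
\min_{Y \in \mathbb{R}^{k \times T}} \;
  \frac{1}{T^{2}} \,
  \bigl\| X^{\top} X - Y^{\top} Y \bigr\|_F^{2},
\qquad X \in \mathbb{R}^{n \times T},\; k < n .
```

Identifying the feedforward weights with input–output correlations, $W = \tfrac{1}{T} Y X^{\top}$ (Hebbian), and the lateral weights with output–output correlations, $M = \tfrac{1}{T} Y Y^{\top}$ (anti-Hebbian), turns the problem into a min-max game between the two sets of weights, with each output computed at the fixed point $y_t = M^{-1} W x_t$.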

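The single-layer network described in the abstract can be sketched numerically. The code below is an illustrative online dimensionality-reduction run with Hebbian feedforward and anti-Hebbian lateral updates; the dimensions, step-size schedule, and initialization are my assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, T = 10, 3, 3000  # input dim, output dim, number of streamed samples

# Synthetic inputs concentrated near a k-dimensional subspace, plus small noise
basis = rng.standard_normal((n, k))
X = basis @ rng.standard_normal((k, T)) + 0.1 * rng.standard_normal((n, T))

W = 0.1 * rng.standard_normal((k, n))  # feedforward weights (Hebbian)
M = np.eye(k)                          # lateral weights (anti-Hebbian)

for t in range(T):
    x = X[:, t]
    # Recurrent dynamics settle at the fixed point y = M^{-1} W x
    y = np.linalg.solve(M, W @ x)
    eta = 1.0 / (100.0 + t)  # decaying step size (an assumption)
    # Hebbian update: each synapse uses only its pre- and postsynaptic activity
    W += eta * (np.outer(y, x) - W)
    # Anti-Hebbian update: lateral weights track output correlations
    M += eta * (np.outer(y, y) - M)
```

Every update is local to the synapse it modifies, which is the property the similarity matching derivation is meant to explain; after training, the effective filter M⁻¹W approximately spans the principal subspace of the inputs.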

Similar Articles

1
Why Do Similarity Matching Objectives Lead to Hebbian/Anti-Hebbian Networks?
Neural Comput. 2018 Jan;30(1):84-124. doi: 10.1162/neco_a_01018. Epub 2017 Sep 28.
2
A Hebbian/Anti-Hebbian Neural Network for Linear Subspace Learning: A Derivation from Multidimensional Scaling of Streaming Data.
Neural Comput. 2015 Jul;27(7):1461-95. doi: 10.1162/NECO_a_00745. Epub 2015 May 14.
3
Effective neuronal learning with ineffective Hebbian learning rules.
Neural Comput. 2001 Apr;13(4):817-40. doi: 10.1162/089976601300014367.
4
Hebbian errors in learning: an analysis using the Oja model.
J Theor Biol. 2009 Jun 21;258(4):489-501. doi: 10.1016/j.jtbi.2009.01.036. Epub 2009 Feb 25.
5
A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.
Neural Comput. 2008 Dec;20(12):2937-66. doi: 10.1162/neco.2008.05-07-530.
6
Dimensional reduction for reward-based learning.
Network. 2006 Sep;17(3):235-52. doi: 10.1080/09548980600773215.
7
Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions.
Evol Comput. 2021 Sep 1;29(3):391-414. doi: 10.1162/evco_a_00286.
8
An n-level field theory of biological neural networks.
J Math Biol. 1993;31(8):771-95. doi: 10.1007/BF00168045.
9
Reinforcement learning by Hebbian synapses with adaptive thresholds.
Neuroscience. 1997 Nov;81(2):303-19. doi: 10.1016/s0306-4522(97)00118-8.
10
Anti-Hebbian learning in a non-linear neural network.
Biol Cybern. 1990;64(2):171-6. doi: 10.1007/BF02331347.

Cited By

1
Optimal sparsity in autoencoder memory models of the hippocampus.
bioRxiv. 2025 Jan 6:2025.01.06.631574. doi: 10.1101/2025.01.06.631574.
2
Synapse-type-specific competitive Hebbian learning forms functional recurrent networks.
Proc Natl Acad Sci U S A. 2024 Jun 18;121(25):e2305326121. doi: 10.1073/pnas.2305326121. Epub 2024 Jun 13.
3
Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction.
Proc Natl Acad Sci U S A. 2023 Jul 18;120(29):e2117484120. doi: 10.1073/pnas.2117484120. Epub 2023 Jul 10.
4
Neural learning rules for generating flexible predictions and computing the successor representation.
Elife. 2023 Mar 16;12:e80680. doi: 10.7554/eLife.80680.
5
Population codes enable learning from few examples by shaping inductive bias.
Elife. 2022 Dec 16;11:e78606. doi: 10.7554/eLife.78606.
6
Structured random receptive fields enable informative sensory encodings.
PLoS Comput Biol. 2022 Oct 10;18(10):e1010484. doi: 10.1371/journal.pcbi.1010484. eCollection 2022 Oct.
7
Applying the Properties of Neurons in Machine Learning: A Brain-like Neural Model with Interactive Stimulation for Data Classification.
Brain Sci. 2022 Sep 3;12(9):1191. doi: 10.3390/brainsci12091191.
8
Adaptive control of synaptic plasticity integrates micro- and macroscopic network function.
Neuropsychopharmacology. 2023 Jan;48(1):121-144. doi: 10.1038/s41386-022-01374-6. Epub 2022 Aug 29.
9
Self-healing codes: How stable neural populations can track continually reconfiguring neural representations.
Proc Natl Acad Sci U S A. 2022 Feb 15;119(7). doi: 10.1073/pnas.2106692119.
10
Place cells may simply be memory cells: Memory compression leads to spatial tuning and history dependence.
Proc Natl Acad Sci U S A. 2021 Dec 21;118(51). doi: 10.1073/pnas.2018422118.