Contrastive Hebbian Feedforward Learning for Neural Networks.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2020 Jun;31(6):2118-2128. doi: 10.1109/TNNLS.2019.2927957. Epub 2019 Jul 31.

DOI: 10.1109/TNNLS.2019.2927957
PMID: 31380771
Abstract

This paper addresses the biological plausibility of both backpropagation (BP) and contrastive Hebbian learning (CHL) used in the Boltzmann machines. The main claim of this paper is that CHL is a general learning algorithm that can be used to steer feedforward networks toward desirable outcomes, and steer them away from undesirable outcomes without any need for the specialized feedback circuit of BP or the symmetric connections used by the Boltzmann machines. After adding perturbations during the learning phase to all the neurons in the network, multiple feedforward outcomes are classified into Hebbian and anti-Hebbian sets based on the network predictions. The algorithm is applied to networks when optimizing a loss objective where BP excels and is also applied to networks with stochastic binary outputs where BP cannot be easily applied. The power of the proposed algorithm lies in its simplicity where both learning and gradient estimation through stochastic binary activations are combined into a single local Hebbian rule. We will also show that both Hebbian and anti-Hebbian correlations are evaluated from the readily available signals that are fundamentally different from CHL used in the Boltzmann machines. We will demonstrate that the new learning paradigm where Hebbian/anti-Hebbian correlations are based on correct/incorrect predictions is a powerful concept that separates this paper from other biologically inspired learning algorithms.
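The learning loop the abstract describes — perturb all neurons during a feedforward pass, then apply a Hebbian update when the network's prediction is correct and an anti-Hebbian update when it is wrong — can be sketched in miniature as follows. This is a toy illustration under our own assumptions, not the paper's reference implementation: the network sizes, Gaussian noise model, tanh activations, learning rate, and toy task are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W, noise_scale=0.1):
    """One noisy feedforward pass; returns the per-layer activations."""
    acts = [x]
    for w in W:
        pre = acts[-1]
        # Perturb every neuron during the learning phase, as in the abstract.
        h = pre @ w + noise_scale * rng.standard_normal(w.shape[1])
        acts.append(np.tanh(h))
    return acts

def local_update(W, acts, correct, lr=0.01):
    """Single local rule: Hebbian if the outcome was correct, anti-Hebbian otherwise."""
    sign = 1.0 if correct else -1.0
    for i, w in enumerate(W):
        pre, post = acts[i], acts[i + 1]
        # Purely local pre/post correlation term; no backpropagated error signal.
        w += sign * lr * np.outer(pre, post)

# Toy task with made-up data: XOR-like labels in {-1, +1} from 2-d inputs.
W = [rng.standard_normal((2, 8)) * 0.5, rng.standard_normal((8, 1)) * 0.5]
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float) * 2 - 1

for epoch in range(30):
    for x, t in zip(X, y):
        acts = forward(x, W)
        pred = np.sign(acts[-1][0]) or 1.0   # break ties toward +1
        # Correct prediction -> Hebbian set; incorrect -> anti-Hebbian set.
        local_update(W, acts, correct=(pred == t))
```

The point of the sketch is the sign flip in `local_update`: the same local correlation term either reinforces or suppresses the sampled feedforward outcome depending only on whether the prediction was correct, with no specialized feedback circuit and no symmetric weight constraint.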


Similar Articles

1. Contrastive Hebbian Feedforward Learning for Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2020 Jun;31(6):2118-2128. doi: 10.1109/TNNLS.2019.2927957. Epub 2019 Jul 31.
2. Contrastive Hebbian learning with random feedback weights.
Neural Netw. 2019 Jun;114:1-14. doi: 10.1016/j.neunet.2019.01.008. Epub 2019 Feb 21.
3. Equivalence of backpropagation and contrastive Hebbian learning in a layered network.
Neural Comput. 2003 Feb;15(2):441-54. doi: 10.1162/089976603762552988.
4. Contrastive Similarity Matching for Supervised Learning.
Neural Comput. 2021 Apr 13;33(5):1300-1328. doi: 10.1162/neco_a_01374.
5. Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks.
Front Comput Neurosci. 2022 Mar 21;16:789253. doi: 10.3389/fncom.2022.789253. eCollection 2022.
6. Deep Learning With Asymmetric Connections and Hebbian Updates.
Front Comput Neurosci. 2019 Apr 4;13:18. doi: 10.3389/fncom.2019.00018. eCollection 2019.
7. Biologically-inspired neuronal adaptation improves learning in neural networks.
Commun Integr Biol. 2023 Jan 17;16(1):2163131. doi: 10.1080/19420889.2022.2163131. eCollection 2023.
8. Learning by Asymmetric Parallel Boltzmann Machines.
Neural Comput. 1991 Fall;3(3):402-408. doi: 10.1162/neco.1991.3.3.402.
9. A theory of local learning, the learning channel, and the optimality of backpropagation.
Neural Netw. 2016 Nov;83:51-74. doi: 10.1016/j.neunet.2016.07.006. Epub 2016 Aug 5.
10. Accelerating the training of feedforward neural networks using generalized Hebbian rules for initializing the internal representations.
IEEE Trans Neural Netw. 1996;7(2):419-26. doi: 10.1109/72.485677.

Cited By

1. The information theory of developmental pruning: Optimizing global network architectures using local synaptic rules.
PLoS Comput Biol. 2021 Oct 11;17(10):e1009458. doi: 10.1371/journal.pcbi.1009458. eCollection 2021 Oct.