Can a Hebbian-like learning rule be avoiding the curse of dimensionality in sparse distributed data?

Affiliations

Department of Computer Science and Engineering, INESC-ID & Instituto Superior Técnico - University of Lisbon, Av. Prof. Dr. Aníbal Cavaco Silva, Porto Salvo, 2744-016, Lisbon, Portugal.

Publication Information

Biol Cybern. 2024 Dec;118(5-6):267-276. doi: 10.1007/s00422-024-00995-y. Epub 2024 Sep 9.

DOI: 10.1007/s00422-024-00995-y
PMID: 39249119
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11588804/
Abstract

It is generally assumed that the brain uses something akin to sparse distributed representations. These representations, however, are high-dimensional and consequently they affect classification performance of traditional Machine Learning models due to the "curse of dimensionality". In tasks for which there is a vast amount of labeled data, Deep Networks seem to solve this issue with many layers and a non-Hebbian backpropagation algorithm. The brain, however, seems to be able to solve the problem with few layers. In this work, we hypothesize that this happens by using Hebbian learning. Actually, the Hebbian-like learning rule of Restricted Boltzmann Machines learns the input patterns asymmetrically. It exclusively learns the correlation between non-zero values and ignores the zeros, which represent the vast majority of the input dimensionality. By ignoring the zeros the "curse of dimensionality" problem can be avoided. To test our hypothesis, we generated several sparse datasets and compared the performance of a Restricted Boltzmann Machine classifier with some Backprop-trained networks. The experiments using these codes confirm our initial intuition as the Restricted Boltzmann Machine shows a good generalization performance, while the Neural Networks trained with the backpropagation algorithm overfit the training data.
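
The asymmetry described in the abstract can be made concrete with the contrastive divergence (CD-1) update used to train Restricted Boltzmann Machines, which has the Hebbian-like form dW = eta * (v0 h0^T - v1 h1^T): the data-driven positive-phase term v0 h0^T is identically zero in every row attached to a zero-valued visible unit, so with a sparse input only the few active dimensions contribute. The following is a minimal NumPy sketch of one CD-1 step under that standard formulation, not the authors' code; the layer sizes, sparsity level, and learning rate are illustrative assumptions, and biases are omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden, sparsity = 1000, 64, 0.02  # illustrative sizes, not from the paper

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # One sparse binary pattern: roughly 2% of the visible units are active.
    v0 = (rng.random(n_visible) < sparsity).astype(float)

    W = 0.01 * rng.standard_normal((n_visible, n_hidden))

    # Positive phase: hidden probabilities driven by the data.
    h0 = sigmoid(v0 @ W)
    # Negative phase: one Gibbs step (sample hiddens, reconstruct visibles).
    h0_sample = (rng.random(n_hidden) < h0).astype(float)
    v1 = sigmoid(W @ h0_sample)
    h1 = sigmoid(v1 @ W)

    # CD-1 update: Hebbian data term minus model (reconstruction) term.
    lr = 0.1
    positive = np.outer(v0, h0)  # zero in every row where v0 == 0
    negative = np.outer(v1, h1)
    W += lr * (positive - negative)

    # The data-driven term touches only the rows of W for active inputs:
    print("rows of W updated by the positive phase:",
          np.count_nonzero(positive.sum(axis=1)), "of", n_visible)

Running this prints roughly 20 of 1000: the Hebbian data term concentrates learning on the non-zero dimensions of the sparse code, which is the mechanism the paper proposes for sidestepping the curse of dimensionality.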

Figures (PMC full text):
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6620/11588804/ee5f7a8da39f/422_2024_995_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6620/11588804/857e0cdd7cc6/422_2024_995_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6620/11588804/da0543fd0590/422_2024_995_Fig3_HTML.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6620/11588804/53dedae7b02c/422_2024_995_Fig4_HTML.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6620/11588804/b62e27c88cf8/422_2024_995_Fig5_HTML.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6620/11588804/5cfebb817ce3/422_2024_995_Fig6_HTML.jpg

Similar Articles

1. Can a Hebbian-like learning rule be avoiding the curse of dimensionality in sparse distributed data?
   Biol Cybern. 2024 Dec;118(5-6):267-276. doi: 10.1007/s00422-024-00995-y. Epub 2024 Sep 9.
2. Where do features come from?
   Cogn Sci. 2014 Aug;38(6):1078-101. doi: 10.1111/cogs.12049. Epub 2013 Jun 25.
3. Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs.
   Neural Comput. 2022 May 19;34(6):1329-1368. doi: 10.1162/neco_a_01497.
4. Hebbian semi-supervised learning in a sample efficiency setting.
   Neural Netw. 2021 Nov;143:719-731. doi: 10.1016/j.neunet.2021.08.003. Epub 2021 Aug 13.
5. Contrastive Hebbian Feedforward Learning for Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2020 Jun;31(6):2118-2128. doi: 10.1109/TNNLS.2019.2927957. Epub 2019 Jul 31.
6. Sparse coding with a somato-dendritic rule.
   Neural Netw. 2020 Nov;131:37-49. doi: 10.1016/j.neunet.2020.06.007. Epub 2020 Jun 26.
7. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
   Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
8. Contrastive Hebbian learning with random feedback weights.
   Neural Netw. 2019 Jun;114:1-14. doi: 10.1016/j.neunet.2019.01.008. Epub 2019 Feb 21.
9. Why Do Similarity Matching Objectives Lead to Hebbian/Anti-Hebbian Networks?
   Neural Comput. 2018 Jan;30(1):84-124. doi: 10.1162/neco_a_01018. Epub 2017 Sep 28.
10. A clinical text classification paradigm using weak supervision and deep representation.
   BMC Med Inform Decis Mak. 2019 Jan 7;19(1):1. doi: 10.1186/s12911-018-0723-6.
