
Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks

Authors

Christopher Hillar, Tenzin Chan, Rachel Taubman, David Rolnick

Affiliations

Awecom, Inc., San Francisco, CA 94103, USA.

Singapore University of Technology and Design, Singapore 487372, Singapore.

Publication

Entropy (Basel). 2021 Nov 11;23(11):1494. doi: 10.3390/e23111494.

DOI: 10.3390/e23111494
PMID: 34828192
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8622935/
Abstract

In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graph theory. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of 2^Ω(n^(1−ϵ)) memories for any ϵ > 0. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
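As context for the dynamics the abstract describes, the sketch below implements a small binary Hopfield network: symmetric weights, the standard energy function, and asynchronous threshold updates that converge to a fixed-point attractor. This is a minimal illustration only; the Hebbian outer-product storage rule used here is a classical stand-in, not the paper's minimum energy flow (MEF) training objective.

```python
import numpy as np

def energy(W, theta, x):
    # Hopfield energy E(x) = -1/2 x^T W x + theta^T x, for x in {-1, +1}^n
    return -0.5 * x @ W @ x + theta @ x

def recall(W, theta, x, max_iters=100):
    # Asynchronous updates until a fixed point (a local energy minimum)
    x = x.copy()
    for _ in range(max_iters):
        changed = False
        for i in range(len(x)):
            h = W[i] @ x - theta[i]
            s = 1 if h >= 0 else -1
            if s != x[i]:
                x[i] = s
                changed = True
        if not changed:  # fixed point reached
            break
    return x

# Store two patterns with the classical Hebbian rule (illustrative only)
patterns = np.array([[1, -1, 1, -1, 1],
                     [-1, -1, 1, 1, -1]])
n = patterns.shape[1]
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)  # no self-connections
theta = np.zeros(n)

# Corrupt one bit of the first pattern and let the dynamics repair it
noisy = patterns[0].copy()
noisy[0] *= -1
print(recall(W, theta, noisy))  # converges back to the stored pattern
```

Each asynchronous flip can only lower the energy, so the dynamics must terminate at a fixed point; the paper's MEF objective instead fits W and theta as a convex optimization so that the training patterns become such fixed points with robust basins.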


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1a1/8622935/2643e24e73fd/entropy-23-01494-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1a1/8622935/1c52a3d70ddb/entropy-23-01494-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1a1/8622935/4119155582ff/entropy-23-01494-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1a1/8622935/5a01ee911c86/entropy-23-01494-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1a1/8622935/3c35d8f8508c/entropy-23-01494-g005.jpg

Similar Articles

1
Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks.
Entropy (Basel). 2021 Nov 11;23(11):1494. doi: 10.3390/e23111494.
2
Robust Exponential Memory in Hopfield Networks.
J Math Neurosci. 2018 Jan 16;8(1):1. doi: 10.1186/s13408-017-0056-2.
3
Beyond the Maximum Storage Capacity Limit in Hopfield Recurrent Neural Networks.
Entropy (Basel). 2019 Jul 25;21(8):726. doi: 10.3390/e21080726.
4
The storage capacity of a directed graph and nodewise autonomous, ubiquitous learning.
Front Comput Neurosci. 2023 Oct 19;17:1254355. doi: 10.3389/fncom.2023.1254355. eCollection 2023.
5
On the Maximum Storage Capacity of the Hopfield Model.
Front Comput Neurosci. 2017 Jan 10;10:144. doi: 10.3389/fncom.2016.00144. eCollection 2016.
6
Enhanced storage capacity with errors in scale-free Hopfield neural networks: An analytical study.
PLoS One. 2017 Oct 27;12(10):e0184683. doi: 10.1371/journal.pone.0184683. eCollection 2017.
7
In Search of Dispersed Memories: Generative Diffusion Models Are Associative Memory Networks.
Entropy (Basel). 2024 Apr 29;26(5):381. doi: 10.3390/e26050381.
8
Robust computation with rhythmic spike patterns.
Proc Natl Acad Sci U S A. 2019 Sep 3;116(36):18050-18059. doi: 10.1073/pnas.1902653116. Epub 2019 Aug 20.
9
Modern Hopfield Networks for graph embedding.
Front Big Data. 2022 Nov 17;5:1044709. doi: 10.3389/fdata.2022.1044709. eCollection 2022.
10
Universal Hopfield Networks: A General Framework for Single-Shot Associative Memory Models.
Proc Mach Learn Res. 2022 Jul;162:15561-15583.

Cited By

1
Detecting Signatures of Criticality Using Divergence Rate.
Entropy (Basel). 2025 Apr 30;27(5):487. doi: 10.3390/e27050487.
2
Photonic Stochastic Emergent Storage for deep classification by scattering-intrinsic patterns.
Nat Commun. 2024 Jan 13;15(1):505. doi: 10.1038/s41467-023-44498-z.

References

1
Clustering Hidden Markov Models With Variational Bayesian Hierarchical EM.
IEEE Trans Neural Netw Learn Syst. 2023 Mar;34(3):1537-1551. doi: 10.1109/TNNLS.2021.3105570. Epub 2023 Feb 28.
2
Recurrence is required to capture the representational dynamics of the human visual system.
Proc Natl Acad Sci U S A. 2019 Oct 22;116(43):21854-21863. doi: 10.1073/pnas.1905544116. Epub 2019 Oct 7.
3
Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior.
Nat Neurosci. 2019 Jun;22(6):974-983. doi: 10.1038/s41593-019-0392-5. Epub 2019 Apr 29.
4
Unsupervised clustering of temporal patterns in high-dimensional neuronal ensembles using a novel dissimilarity measure.
PLoS Comput Biol. 2018 Jul 6;14(7):e1006283. doi: 10.1371/journal.pcbi.1006283. eCollection 2018 Jul.
5
An introduction to the maximum entropy approach and its application to inference problems in biology.
Heliyon. 2018 Apr 13;4(4):e00596. doi: 10.1016/j.heliyon.2018.e00596. eCollection 2018 Apr.
6
Robust Exponential Memory in Hopfield Networks.
J Math Neurosci. 2018 Jan 16;8(1):1. doi: 10.1186/s13408-017-0056-2.
7
Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network.
PLoS One. 2017 May 26;12(5):e0178683. doi: 10.1371/journal.pone.0178683. eCollection 2017.
8
Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images.
IEEE Trans Pattern Anal Mach Intell. 1984 Jun;6(6):721-41. doi: 10.1109/tpami.1984.4767596.
9
New method for parameter estimation in probabilistic models: minimum probability flow.
Phys Rev Lett. 2011 Nov 25;107(22):220601. doi: 10.1103/PhysRevLett.107.220601. Epub 2011 Nov 21.
10
The structure of large-scale synchronized firing in primate retina.
J Neurosci. 2009 Apr 15;29(15):5022-31. doi: 10.1523/JNEUROSCI.5187-08.2009.