
Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses.

Affiliations

Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino, Italy.

Human Genetics Foundation-Torino, Via Nizza 52, I-10126 Torino, Italy.

Publication Information

Phys Rev Lett. 2015 Sep 18;115(12):128101. doi: 10.1103/PhysRevLett.115.128101.

DOI: 10.1103/PhysRevLett.115.128101
PMID: 26431018
Abstract

We show that discrete synaptic weights can be efficiently used for learning in large scale neural systems, and lead to unanticipated computational performance. We focus on the representative case of learning random patterns with binary synapses in single layer networks. The standard statistical analysis shows that this problem is exponentially dominated by isolated solutions that are extremely hard to find algorithmically. Here, we introduce a novel method that allows us to find analytical evidence for the existence of subdominant and extremely dense regions of solutions. Numerical experiments confirm these findings. We also show that the dense regions are surprisingly accessible by simple learning protocols, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions. These outcomes extend to synapses with multiple states and to deeper neural architectures. The large deviation measure also suggests how to design novel algorithmic schemes for optimization based on local entropy maximization.

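The abstract's representative setup is storing random binary patterns in a single-layer network whose synapses take only two values. As a concrete reference point, the sketch below is a minimal illustration of that setup, not the authors' algorithm or their large-deviation analysis: it trains a perceptron whose weights are constrained to ±1 by clipping integer hidden states, a common heuristic in the "simple learning protocol" family the abstract refers to. All sizes, names, and the update rule are illustrative choices.

```python
import numpy as np

# Illustrative sketch only (not the algorithm from the paper): store random
# +1/-1 patterns in a single-layer perceptron with binary (+1/-1) synapses.
# Integer hidden states are updated with a perceptron-style rule and clipped
# to obtain the actual binary weights.  Sizes are kept deliberately small.

rng = np.random.default_rng(0)

N = 101                      # number of synapses (odd, to avoid ties)
alpha = 0.3                  # load: patterns per synapse
P = int(alpha * N)           # number of random patterns to store

patterns = rng.choice([-1, 1], size=(P, N))   # random binary inputs
labels = rng.choice([-1, 1], size=P)          # random binary targets

h = rng.integers(-3, 4, size=N)               # hidden integer states

def binary_weights(h):
    """The binary synapses are the signs of the hidden states."""
    return np.where(h >= 0, 1, -1)

max_epochs = 2000
for epoch in range(max_epochs):
    mistakes = 0
    for mu in rng.permutation(P):
        xi, sigma = patterns[mu], labels[mu]
        # Misclassified under the clipped weights -> perceptron-style
        # update applied to the hidden states.
        if sigma * binary_weights(h) @ xi <= 0:
            h += sigma * xi
            mistakes += 1
    if mistakes == 0:
        print(f"stored all {P} patterns after {epoch + 1} epochs")
        break
else:
    print(f"not converged after {max_epochs} epochs; "
          f"{mistakes} errors in the last sweep")
```

At the small sizes used here such clipping heuristics usually succeed. The paper's point is that, at large N, the standard statistical analysis predicts a solution space dominated by isolated, algorithmically hard-to-find solutions, yet simple protocols still reach the rare, subdominant dense clusters, whose configurations are also more robust and generalize better.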

Similar Articles

1. Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses. Phys Rev Lett. 2015 Sep 18;115(12):128101. doi: 10.1103/PhysRevLett.115.128101.
2. Role of Synaptic Stochasticity in Training Low-Precision Neural Networks. Phys Rev Lett. 2018 Jun 29;120(26):268103. doi: 10.1103/PhysRevLett.120.268103.
3. Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes. Proc Natl Acad Sci U S A. 2016 Nov 29;113(48):E7655-E7662. doi: 10.1073/pnas.1608103113. Epub 2016 Nov 15.
4. Learning may need only a few bits of synaptic precision. Phys Rev E. 2016 May;93(5):052313. doi: 10.1103/PhysRevE.93.052313. Epub 2016 May 27.
5. Typical and atypical solutions in nonconvex neural networks with discrete and continuous weights. Phys Rev E. 2023 Aug;108(2-1):024310. doi: 10.1103/PhysRevE.108.024310.
6. Properties of the Geometry of Solutions and Capacity of Multilayer Neural Networks with Rectified Linear Unit Activations. Phys Rev Lett. 2019 Oct 25;123(17):170602. doi: 10.1103/PhysRevLett.123.170602.
7. Origin of the computational hardness for learning with binary synapses. Phys Rev E Stat Nonlin Soft Matter Phys. 2014 Nov;90(5-1):052813. doi: 10.1103/PhysRevE.90.052813. Epub 2014 Nov 17.
8. Efficient supervised learning in networks with binary synapses. Proc Natl Acad Sci U S A. 2007 Jun 26;104(26):11079-84. doi: 10.1073/pnas.0700324104. Epub 2007 Jun 20.
9. Convergence of stochastic learning in perceptrons with binary synapses. Phys Rev E Stat Nonlin Soft Matter Phys. 2005 Jun;71(6 Pt 1):061907. doi: 10.1103/PhysRevE.71.061907. Epub 2005 Jun 16.
10. Synaptic dynamics: linear model and adaptation algorithm. Neural Netw. 2014 Aug;56:49-68. doi: 10.1016/j.neunet.2014.04.001. Epub 2014 Apr 28.

Cited By

1. Cognition of Time and Thinking Beyond. Adv Exp Med Biol. 2024;1455:171-195. doi: 10.1007/978-3-031-60183-5_10.
2. PAC Bayesian Performance Guarantees for Deep (Stochastic) Networks in Medical Imaging. Med Image Comput Comput Assist Interv. 2021 Sep-Oct;12903:560-570. doi: 10.1007/978-3-030-87199-4_53. Epub 2021 Sep 21.
3. Variational Characterizations of Local Entropy and Heat Regularization in Deep Learning. Entropy (Basel). 2019 May 20;21(5):511. doi: 10.3390/e21050511.
4. Generalization properties of neural network approximations to frustrated magnet ground states. Nat Commun. 2020 Mar 27;11(1):1593. doi: 10.1038/s41467-020-15402-w.
5. Shaping the learning landscape in neural networks around wide flat minima. Proc Natl Acad Sci U S A. 2020 Jan 7;117(1):161-170. doi: 10.1073/pnas.1908636117. Epub 2019 Dec 23.
6. Optimization of neural networks via finite-value quantum fluctuations. Sci Rep. 2018 Jul 2;8(1):9950. doi: 10.1038/s41598-018-28212-4.
7. Efficiency of quantum vs. classical annealing in nonconvex learning problems. Proc Natl Acad Sci U S A. 2018 Feb 13;115(7):1457-1462. doi: 10.1073/pnas.1711456115. Epub 2018 Jan 30.
8. Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes. Proc Natl Acad Sci U S A. 2016 Nov 29;113(48):E7655-E7662. doi: 10.1073/pnas.1608103113. Epub 2016 Nov 15.