

Role of Synaptic Stochasticity in Training Low-Precision Neural Networks.

Affiliations

Bocconi Institute for Data Science and Analytics, Bocconi University, Milano 20136, Italy.

Italian Institute for Genomic Medicine, Torino 10126, Italy.

Publication

Phys Rev Lett. 2018 Jun 29;120(26):268103. doi: 10.1103/PhysRevLett.120.268103.

DOI: 10.1103/PhysRevLett.120.268103
PMID: 30004730
Abstract

Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes. Here we show that a neural network model with stochastic binary weights naturally gives prominence to exponentially rare dense regions of solutions with a number of desirable properties such as robustness and good generalization performance, while typical solutions are isolated and hard to find. Binary solutions of the standard perceptron problem are obtained from a simple gradient descent procedure on a set of real values parametrizing a probability distribution over the binary synapses. Both analytical and numerical results are presented. An algorithmic extension that allows training of discrete deep neural networks is also investigated.
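The core idea in the abstract, gradient descent on real values that parametrize a probability distribution over binary synapses, can be illustrated with a small teacher-student perceptron experiment. This is only a sketch under illustrative assumptions (the sizes, the hinge-style margin loss, and the mean-field forward pass through the expected weights m_i = tanh(theta_i) are choices made here, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher-student perceptron with binary weights w_i in {-1, +1}.
# The student holds real parameters theta_i that parametrize a
# distribution over binary synapses, with expected weight
# m_i = tanh(theta_i); gradient descent acts on theta, never
# directly on the discrete weights.
N, P = 101, 300            # N odd so that sign(.) is never zero
w_teacher = rng.choice([-1.0, 1.0], size=N)
X = rng.choice([-1.0, 1.0], size=(P, N))
y = np.sign(X @ w_teacher)

theta = np.zeros(N)
lr = 0.05
for epoch in range(300):
    m = np.tanh(theta)                      # expected binary weights
    margins = y * (X @ m) / np.sqrt(N)
    mis = margins < 0.5                     # examples below target margin
    # perceptron-style hinge-loss gradient with respect to m
    grad_m = -(y[mis, None] * X[mis]).sum(0) / np.sqrt(N)
    theta -= lr * grad_m * (1.0 - m**2)     # chain rule through tanh

# Clip to a concrete binary solution and measure training accuracy.
w_binary = np.where(np.tanh(theta) >= 0.0, 1.0, -1.0)
train_acc = (np.sign(X @ w_binary) == y).mean()
```

After training, clipping the expected weights to their signs yields a candidate binary solution; in this dense-region picture such clipped solutions tend to remain good classifiers, which is the robustness property the abstract highlights.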


Similar articles

1. Role of Synaptic Stochasticity in Training Low-Precision Neural Networks.
Phys Rev Lett. 2018 Jun 29;120(26):268103. doi: 10.1103/PhysRevLett.120.268103.
2. Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses.
Phys Rev Lett. 2015 Sep 18;115(12):128101. doi: 10.1103/PhysRevLett.115.128101.
3. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
4. Properties of the Geometry of Solutions and Capacity of Multilayer Neural Networks with Rectified Linear Unit Activations.
Phys Rev Lett. 2019 Oct 25;123(17):170602. doi: 10.1103/PhysRevLett.123.170602.
5. Constructive training methods for feedforward neural networks with binary weights.
Int J Neural Syst. 1996 May;7(2):149-66. doi: 10.1142/s0129065796000129.
6. Convergence of stochastic learning in perceptrons with binary synapses.
Phys Rev E Stat Nonlin Soft Matter Phys. 2005 Jun;71(6 Pt 1):061907. doi: 10.1103/PhysRevE.71.061907. Epub 2005 Jun 16.
7. Learning may need only a few bits of synaptic precision.
Phys Rev E. 2016 May;93(5):052313. doi: 10.1103/PhysRevE.93.052313. Epub 2016 May 27.
8. Learning through atypical phase transitions in overparameterized neural networks.
Phys Rev E. 2022 Jul;106(1-1):014116. doi: 10.1103/PhysRevE.106.014116.
9. Origin of the computational hardness for learning with binary synapses.
Phys Rev E Stat Nonlin Soft Matter Phys. 2014 Nov;90(5-1):052813. doi: 10.1103/PhysRevE.90.052813. Epub 2014 Nov 17.
10. A Learning Framework for Winner-Take-All Networks with Stochastic Synapses.
Neural Comput. 2018 Jun;30(6):1542-1572. doi: 10.1162/neco_a_01080. Epub 2018 Apr 13.

Cited by

1. Shaping the learning landscape in neural networks around wide flat minima.
Proc Natl Acad Sci U S A. 2020 Jan 7;117(1):161-170. doi: 10.1073/pnas.1908636117. Epub 2019 Dec 23.
2. A high-bias, low-variance introduction to Machine Learning for physicists.
Phys Rep. 2019 May 30;810:1-124. doi: 10.1016/j.physrep.2019.03.001. Epub 2019 Mar 14.