Enhanced gradient for training restricted Boltzmann machines.

Affiliations

Department of Information and Computer Science, Aalto University School of Science, Espoo, Uusimaa 02150, Finland.

Publication Information

Neural Comput. 2013 Mar;25(3):805-31. doi: 10.1162/NECO_a_00397. Epub 2012 Nov 13.

DOI: 10.1162/NECO_a_00397
PMID: 23148412
Abstract

Restricted Boltzmann machines (RBMs) are often used as building blocks in greedy learning of deep networks. However, training this simple model can be laborious. Traditional learning algorithms often converge only with the right choice of metaparameters that specify, for example, learning rate scheduling and the scale of the initial weights. They are also sensitive to specific data representation. An equivalent RBM can be obtained by flipping some bits and changing the weights and biases accordingly, but traditional learning rules are not invariant to such transformations. Without careful tuning of these training settings, traditional algorithms can easily get stuck or even diverge. In this letter, we present an enhanced gradient that is derived to be invariant to bit-flipping transformations. We experimentally show that the enhanced gradient yields more stable training of RBMs both when used with a fixed learning rate and an adaptive one.
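To make the abstract's two claims concrete, here is a minimal NumPy sketch (hypothetical illustration code, not the authors' implementation). Part (a) builds the bit-flip-equivalent RBM the abstract mentions: flipping a visible unit while negating its weight row and visible bias, and absorbing the old row into the hidden biases, leaves p(v) unchanged up to a constant. Part (b) computes the enhanced weight gradient in the centered form the paper derives, from data-phase and model-phase statistics; the variable names and the CD-1-style usage at the end are assumptions for illustration.

```python
# Hypothetical NumPy sketch (not the authors' code) of the two ideas in
# the abstract: bit-flip equivalence of RBMs and the enhanced gradient.
import numpy as np

rng = np.random.default_rng(0)
nv, nh = 4, 3                          # visible / hidden unit counts
W = rng.normal(0, 0.1, (nv, nh))       # weights
b = rng.normal(0, 0.1, nv)             # visible biases
c = rng.normal(0, 0.1, nh)             # hidden biases

def free_energy(v, W, b, c):
    # -log of the unnormalized marginal p(v) of a binary RBM
    return -(v @ b) - np.logaddexp(0.0, v @ W + c).sum(axis=-1)

# (a) Flip visible unit 0 (v0 -> 1 - v0). The equivalent RBM negates the
# corresponding weight row and visible bias and absorbs the old row into
# the hidden biases; p(v) changes only by a constant offset.
flip = np.zeros(nv, dtype=bool)
flip[0] = True
W2 = W.copy(); W2[flip] = -W[flip]
b2 = b.copy(); b2[flip] = -b[flip]
c2 = c + W[flip].sum(axis=0)

v = rng.integers(0, 2, (8, nv)).astype(float)
v_flipped = v.copy(); v_flipped[:, flip] = 1.0 - v_flipped[:, flip]
diff = free_energy(v, W, b, c) - free_energy(v_flipped, W2, b2, c2)
assert np.allclose(diff, diff[0])      # equal up to one constant

# (b) Enhanced gradient for W from data (d) and model (m) statistics,
# with <.>_dm denoting the average of the data and model means.
def enhanced_grad_W(v_d, h_d, v_m, h_m):
    vh_d = v_d.T @ h_d / len(v_d)                 # <v h>_d
    vh_m = v_m.T @ h_m / len(v_m)                 # <v h>_m
    v_dm = 0.5 * (v_d.mean(0) + v_m.mean(0))      # <v>_dm
    h_dm = 0.5 * (h_d.mean(0) + h_m.mean(0))      # <h>_dm
    dv = v_d.mean(0) - v_m.mean(0)
    dh = h_d.mean(0) - h_m.mean(0)
    # standard gradient <vh>_d - <vh>_m, minus the terms that would
    # otherwise change under a bit flip
    return vh_d - vh_m - np.outer(v_dm, dh) - np.outer(dv, h_dm)

# Example use with CD-1-style statistics:
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
h_d = sigmoid(v @ W + c)                               # positive phase
v_m = (sigmoid(h_d @ W.T + b) > rng.random((8, nv))).astype(float)
h_m = sigmoid(v_m @ W + c)                             # negative phase
W += 0.1 * enhanced_grad_W(v, h_d, v_m, h_m)
```

The subtracted outer-product terms transform under a bit flip in a way that compensates the change in the <vh> statistics, which is the invariance the abstract claims for the enhanced gradient; the plain gradient <vh>_d - <vh>_m lacks this property.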

Similar Articles

[1]
Enhanced gradient for training restricted Boltzmann machines.

Neural Comput. 2012-11-13

[2]
Representational power of restricted Boltzmann machines and deep belief networks.

Neural Comput. 2008-6

[3]
Where do features come from?

Cogn Sci. 2014-8

[4]
Approximate learning algorithm in Boltzmann machines.

Neural Comput. 2009-11

[5]
Bounding the bias of contrastive divergence learning.

Neural Comput. 2011-3

[6]
Accelerating deep learning with memcomputing.

Neural Netw. 2018-11-3

[7]
Measuring the usefulness of hidden units in Boltzmann machines with mutual information.

Neural Netw. 2014-9-28

[8]
Dynamical analysis of contrastive divergence learning: Restricted Boltzmann machines with Gaussian visible units.

Neural Netw. 2016-4-12

[9]
Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks.

Neural Comput. 2010-10

[10]
Refinements of universal approximation results for deep belief networks and restricted Boltzmann machines.

Neural Comput. 2011-2-7

Cited By

[1]
Regularization, Bayesian Inference, and Machine Learning Methods for Inverse Problems.

Entropy (Basel). 2021-12-13

[2]
A modality-adaptive method for segmenting brain tumors and organs-at-risk in radiation therapy planning.

Med Image Anal. 2019-5

[3]
A Gestalt inference model for auditory scene segregation.

PLoS Comput Biol. 2019-1-22

[4]
Deep Learning for Computer Vision: A Brief Review.

Comput Intell Neurosci. 2018-2-1
