
ASD+M: Automatic parameter tuning in stochastic optimization and on-line learning.

Affiliation

Warsaw University of Technology, Institute of Computer Science, Nowowiejska 15/19, 00-665 Warsaw, Poland.

Publication Information

Neural Netw. 2017 Dec;96:1-10. doi: 10.1016/j.neunet.2017.07.007. Epub 2017 Sep 7.

DOI: 10.1016/j.neunet.2017.07.007
PMID: 28950104
Abstract

In this paper the classic momentum algorithm for stochastic optimization is considered. A method is introduced that adjusts the algorithm's coefficients during its operation. The method does not depend on any preliminary knowledge of the optimization problem. In an experimental study, the method is applied to on-line learning in feed-forward neural networks, including deep auto-encoders, and outperforms any fixed choice of coefficients. The method thus eliminates coefficients that are difficult to determine yet have a profound influence on performance. While the method itself has some coefficients, they are easy to determine, and the sensitivity of performance to them is low. Consequently, the method makes on-line learning a practically parameter-free process and broadens the area of potential application of this technology.
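The abstract builds on the classic momentum ("heavy ball") update, whose two coefficients (the learning rate and the momentum factor) are what ASD+M tunes automatically during operation. Below is a minimal Python sketch of the fixed-coefficient baseline that the paper improves on; the ASD+M adjustment rule itself is not described in this record, so the function name sgd_momentum_step, the toy quadratic objective, and the coefficient values are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sgd_momentum_step(theta, velocity, grad, lr=0.01, momentum=0.9):
    """One step of classic (heavy-ball) momentum for stochastic optimization.

    theta:    current parameter vector
    velocity: running update direction (same shape as theta)
    grad:     stochastic gradient estimate at theta
    lr, momentum: the two coefficients that ASD+M adjusts on-line;
                  here they are fixed, which is the baseline the paper compares against.
    """
    velocity = momentum * velocity - lr * grad  # accumulate a decaying sum of past gradients
    theta = theta + velocity                    # move along the accumulated direction
    return theta, velocity

# Toy usage (assumed for illustration): minimize f(theta) = ||theta||^2
# from noisy gradient estimates, mimicking the stochastic setting.
rng = np.random.default_rng(0)
theta = rng.normal(size=5)
velocity = np.zeros_like(theta)
for _ in range(200):
    grad = 2 * theta + rng.normal(scale=0.1, size=theta.shape)  # noisy gradient of ||theta||^2
    theta, velocity = sgd_momentum_step(theta, velocity, grad)
print(theta)  # should be close to the zero vector
```

With fixed lr and momentum, performance depends strongly on the values chosen, which is exactly the sensitivity the abstract says ASD+M removes by adapting the coefficients on-line.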


Similar Articles

1. ASD+M: Automatic parameter tuning in stochastic optimization and on-line learning.
   Neural Netw. 2017 Dec;96:1-10. doi: 10.1016/j.neunet.2017.07.007. Epub 2017 Sep 7.

2. Research on a learning rate with energy index in deep learning.
   Neural Netw. 2019 Feb;110:225-231. doi: 10.1016/j.neunet.2018.12.009. Epub 2018 Dec 19.

3. Ensemble Neural Networks (ENN): A gradient-free stochastic method.
   Neural Netw. 2019 Feb;110:170-185. doi: 10.1016/j.neunet.2018.11.009. Epub 2018 Dec 3.

4. Neural network training as a dissipative process.
   Neural Netw. 2016 Sep;81:72-80. doi: 10.1016/j.neunet.2016.05.005. Epub 2016 Jun 21.

5. A universal deep learning approach for modeling the flow of patients under different severities.
   Comput Methods Programs Biomed. 2018 Feb;154:191-203. doi: 10.1016/j.cmpb.2017.11.003. Epub 2017 Nov 7.

6. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods.
   Neural Netw. 2018 Mar;99:158-165. doi: 10.1016/j.neunet.2018.01.005. Epub 2018 Jan 31.

7. Stochastic learning via optimizing the variational inequalities.
   IEEE Trans Neural Netw Learn Syst. 2014 Oct;25(10):1769-78. doi: 10.1109/TNNLS.2013.2294741.

8. A fast and scalable recurrent neural network based on stochastic meta descent.
   IEEE Trans Neural Netw. 2008 Sep;19(9):1652-8. doi: 10.1109/TNN.2008.2000838.

9. Adaptive natural gradient learning algorithms for various stochastic models.
   Neural Netw. 2000 Sep;13(7):755-64. doi: 10.1016/s0893-6080(00)00051-4.

10. Learning curves for stochastic gradient descent in linear feedforward networks.
    Neural Comput. 2005 Dec;17(12):2699-718. doi: 10.1162/089976605774320539.

Cited By

1. Classification of Alzheimer's Disease Based on Eight-Layer Convolutional Neural Network with Leaky Rectified Linear Unit and Max Pooling.
   J Med Syst. 2018 Mar 26;42(5):85. doi: 10.1007/s10916-018-0932-7.