
Online Learning for DNN Training: A Stochastic Block Adaptive Gradient Algorithm.

Affiliations

School of Information Engineering, Henan University of Science and Technology, Luoyang 471023, China.

Internet of Things & Smart City Innovation Platform, Zhuhai Fudan Innovation Institute, Zhuhai, China.

Publication information

Comput Intell Neurosci. 2022 Jun 2;2022:9337209. doi: 10.1155/2022/9337209. eCollection 2022.

DOI: 10.1155/2022/9337209
PMID: 35694581
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9184190/
Abstract

Adaptive algorithms are widely used for training deep neural networks (DNNs) because of their fast convergence rate. However, the training cost becomes prohibitively expensive when training complicated DNNs, due to the computation of the full gradient. To reduce the computational cost, we present a stochastic block adaptive gradient online training algorithm in this study, called SBAG. In this algorithm, stochastic block coordinate descent and an adaptive learning rate are utilized at each iteration. We also prove that a regret bound of $O(\sqrt{T})$ can be achieved via SBAG, in which $T$ is a time horizon. In addition, we use SBAG to train ResNet-34 and DenseNet-121 on CIFAR-10, respectively. The results demonstrate that SBAG has better training speed and generalization ability than other existing training methods.
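
A note on the regret claim, for readers less familiar with online learning: regret measures the gap between the cumulative loss of the learner's iterates and that of the best fixed parameter chosen in hindsight,

$$R(T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x} \sum_{t=1}^{T} f_t(x),$$

so a bound of $O(\sqrt{T})$ implies that the average regret $R(T)/T$ vanishes as the time horizon $T$ grows.

To make the update rule concrete, below is a minimal sketch of one stochastic block adaptive gradient step, combining the two ingredients the abstract names: a randomly sampled coordinate block and an AdaGrad-style adaptive learning rate. All names here (`sbag_step`, `grad_fn`, `block_frac`) are illustrative assumptions, not taken from the paper, and the sketch omits details such as momentum terms and the paper's exact block-sampling scheme.

```python
import numpy as np

def sbag_step(params, grad_fn, accum, lr=0.01, block_frac=0.25, eps=1e-8, rng=None):
    """One stochastic-block adaptive-gradient step (illustrative, not the paper's exact method).

    params : 1-D parameter vector, updated in place
    grad_fn: callable (params, block) -> gradient restricted to the index block
    accum  : running sum of squared gradients, same shape as params
    """
    rng = np.random.default_rng() if rng is None else rng
    n = params.size
    # Sample a random coordinate block; only its gradient is computed,
    # which is the source of the per-iteration cost savings.
    block = rng.choice(n, size=max(1, int(block_frac * n)), replace=False)
    g = grad_fn(params, block)
    # AdaGrad-style accumulator gives each coordinate its own step size.
    accum[block] += g ** 2
    params[block] -= lr * g / (np.sqrt(accum[block]) + eps)
    return params, accum

# Toy usage: minimize f(x) = ||x - 1||^2, whose block-restricted gradient is 2(x_b - 1).
x = np.zeros(100)
acc = np.zeros_like(x)
for _ in range(2000):
    x, acc = sbag_step(x, lambda p, b: 2.0 * (p[b] - 1.0), acc, lr=0.5)
print(np.round(x.mean(), 2))  # approaches 1.0
```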


Figures (PMC image links; duplicates removed):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3727/9184190/9b973f7ad316/CIN2022-9337209.alg.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3727/9184190/78adb642e618/CIN2022-9337209.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3727/9184190/9406c1449841/CIN2022-9337209.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3727/9184190/9d0c5d1c06fe/CIN2022-9337209.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3727/9184190/c755226cc420/CIN2022-9337209.004.jpg

Similar articles

1. Online Learning for DNN Training: A Stochastic Block Adaptive Gradient Algorithm.
   Comput Intell Neurosci. 2022 Jun 2;2022:9337209. doi: 10.1155/2022/9337209. eCollection 2022.
2. A multivariate adaptive gradient algorithm with reduced tuning efforts.
   Neural Netw. 2022 Aug;152:499-509. doi: 10.1016/j.neunet.2022.05.016. Epub 2022 May 21.
3. A Novel Learning Algorithm to Optimize Deep Neural Networks: Evolved Gradient Direction Optimizer (EVGO).
   IEEE Trans Neural Netw Learn Syst. 2021 Feb;32(2):685-694. doi: 10.1109/TNNLS.2020.2979121. Epub 2021 Feb 4.
4. Improving Deep Neural Networks' Training for Image Classification With Nonlinear Conjugate Gradient-Style Adaptive Momentum.
   IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):12288-12300. doi: 10.1109/TNNLS.2023.3255783. Epub 2024 Sep 3.
5. Accelerating DNN Training Through Selective Localized Learning.
   Front Neurosci. 2022 Jan 11;15:759807. doi: 10.3389/fnins.2021.759807. eCollection 2021.
6. SSGD: Sparsity-Promoting Stochastic Gradient Descent Algorithm for Unbiased DNN Pruning.
   Proc IEEE Int Conf Acoust Speech Signal Process. 2020 May;2020:5410-5414. doi: 10.1109/icassp40776.2020.9054436. Epub 2020 May 14.
7. PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5079-5091. doi: 10.1109/TNNLS.2019.2963066. Epub 2020 Nov 30.
8. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
   IEEE Trans Neural Netw. 2008 Nov;19(11):1841-53. doi: 10.1109/TNN.2008.2001923.
9. Representational Distance Learning for Deep Neural Networks.
   Front Comput Neurosci. 2016 Dec 27;10:131. doi: 10.3389/fncom.2016.00131. eCollection 2016.
10. Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks.
    Proc Math Phys Eng Sci. 2020 Jul;476(2239):20200334. doi: 10.1098/rspa.2020.0334. Epub 2020 Jul 15.